OSPurge version 2
This commit is a whole new version of OSPurge.

Currently, OSPurge suffers from the following limitations:

* It's slow (single-threaded).
* It's not guaranteed to complete: if a resource fails to be deleted, OSPurge
  can choke on deleting other resources that depend on the first one.
* It's not properly unit tested.
* It's not modular (one huge file deals with all services).

This new version is:

* Faster (multithreaded, thanks to a ThreadPoolExecutor).
* Safer (we check and wait for some prerequisites before attempting a delete).
* 100% unit tested.
* Modular (one file per service).

Note that it's Python 3.5 compatible. It also uses the OpenStack Shade and
os-client-config libraries so that OSPurge focuses on the cleaning logic only.

Overall I believe this is a better and more future-proof version of OSPurge.
Note that we tagged and released OSPurge 1.3 recently in case the new version
is not satisfactory to everybody.

Change-Id: I5eb92a0556df210ea3cb4e471b8db3b5bf7ed5ee
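For illustration only, here is a minimal sketch of the threading model described
above: independent deletions are submitted to a standard-library
``concurrent.futures.ThreadPoolExecutor`` instead of being run one at a time.
This is not the actual OSPurge code, and every name in it is hypothetical.

.. code-block:: python

    import concurrent.futures
    import logging
    import time

    logging.basicConfig(level=logging.INFO)


    def delete_resource(resource):
        """Hypothetical stand-in for one API delete call."""
        time.sleep(0.1)  # simulate a slow HTTP request
        return resource["name"]


    # Hypothetical list of independent resources to purge.
    resources = [{"name": "vm-%d" % i} for i in range(8)]

    # Independent deletions run concurrently in a thread pool, which is what
    # makes the new version faster than the old single-threaded loop.
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
        futures = [executor.submit(delete_resource, r) for r in resources]
        for future in concurrent.futures.as_completed(futures):
            logging.info("Deleted %s", future.result())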
parent 1403185a56
commit 3a612efd53

LICENSE  (214)
@@ -1,21 +1,201 @@
-This software is released under the MIT License.
-
-Copyright (c) 2014 Cloudwatt
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   [Sections 1 through 9 of the standard Apache License 2.0 text]
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "{}"
+      replaced with your own identifying information. (Don't include
+      the brackets!) The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright {yyyy} {name of copyright owner}
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
README.rst  (366)
@@ -1,7 +1,10 @@
 OpenStack project resources cleaner
 ===================================
 
-* ``ospurge`` is a standalone, client-side, operators tool that aims at
+What is OSPurge ?
+-----------------
+
+* ``ospurge`` is a standalone client-side tool that aims at
   deleting all resources, taking into account their interdependencies,
   in a specified OpenStack project.
 
@@ -18,61 +21,70 @@ Supported resources
 
 At the moment it is possible to purge the following resources from a project:
 
-* Ceilometer alarms
-* floating IP addresses
-* images / snapshots
-* instances
-* networks
-* routers
-* security groups
+* Floating IP
+* Glance Images
+* Instances
+* Networks
+* Routers
+* Security groups
 * Swift containers
 * Swift objects
-* volumes / snapshots
+* Volumes / Volume snapshots / Volume backups
 
 
-Error codes
------------
+Exit codes
+----------
 
-The following error codes are returned when ``ospurge`` encounters
-an error:
+The following codes are returned when ``ospurge`` exits:
 
-* ``Code 0``: Process exited sucessfully
-* ``Code 1``: Unknown error
-* ``Code 2``: Project doesn't exist
-* ``Code 3``: Authentication failed (e.g. bad username or password)
-* ``Code 4``: Resource deletion failed
-* ``Code 5``: Connection error while deleting a resource (e.g. service not
-  available)
-* ``Code 6``: Connection to endpoint failed (e.g. wrong authentication URL)
+* ``Code 0``: Process exited successfully
+* ``Code 1``: Something went wrong (check the logs)
 
 
 Installation
 ------------
 
-Create a Python virtual environment (requires the
-`virtualenvwrapper <https://virtualenvwrapper.readthedocs.org/>`_):
+Create a Python 3 virtual environment:
 
 .. code-block:: console
 
-    $ mkvirtualenv ospurge
+    $ python3 -m venv ospurge
+    $ source ospurge/bin/activate
 
 Install ``ospurge`` with ``pip``:
 
 .. code-block:: console
 
-    $ pip install ospurge
+    $ python3 -m pip install git+https://git.openstack.org/openstack/ospurge
+    $ OR, to checkout at commit 328f6
+    $ python3 -m pip install git+https://git.openstack.org/openstack/ospurge@328f6
 
-Available options can be displayed by using ``ospurge -h``:
+Available options can be displayed with ``ospurge -h``:
 
 .. code-block:: console
 
     $ ospurge -h
-    usage: ospurge [-h] [--verbose] [--dry-run] [--dont-delete-project]
-                   [--region-name REGION_NAME] [--endpoint-type ENDPOINT_TYPE]
-                   --username USERNAME --password PASSWORD --admin-project
-                   ADMIN_PROJECT [--admin-role-name ADMIN_ROLE_NAME] --auth-url
-                   AUTH_URL [--cleanup-project CLEANUP_PROJECT] [--own-project]
-                   [--insecure]
+    usage: ospurge [-h] [--verbose] [--dry-run] [--delete-shared-resources]
+                   (--purge-project ID_OR_NAME | --purge-own-project)
+                   [--os-cloud <name>] [--os-auth-type <name>]
+                   [--os-auth-url OS_AUTH_URL] [--os-domain-id OS_DOMAIN_ID]
+                   [--os-domain-name OS_DOMAIN_NAME]
+                   [--os-project-id OS_PROJECT_ID]
+                   [--os-project-name OS_PROJECT_NAME]
+                   [--os-project-domain-id OS_PROJECT_DOMAIN_ID]
+                   [--os-project-domain-name OS_PROJECT_DOMAIN_NAME]
+                   [--os-trust-id OS_TRUST_ID]
+                   [--os-default-domain-id OS_DEFAULT_DOMAIN_ID]
+                   [--os-default-domain-name OS_DEFAULT_DOMAIN_NAME]
+                   [--os-user-id OS_USER_ID] [--os-username OS_USERNAME]
+                   [--os-user-domain-id OS_USER_DOMAIN_ID]
+                   [--os-user-domain-name OS_USER_DOMAIN_NAME]
+                   [--os-password OS_PASSWORD] [--insecure]
+                   [--os-cacert <ca-certificate>] [--os-cert <certificate>]
+                   [--os-key <key>] [--timeout <seconds>]
+                   [--os-service-type <name>] [--os-service-name <name>]
+                   [--os-interface <name>] [--os-region-name <name>]
+                   [--os-endpoint-override <name>] [--os-api-version <name>]
 
     Purge resources from an Openstack project.
 
@@ -80,49 +92,105 @@ Available options can be displayed by using ``ospurge -h``:
       -h, --help            show this help message and exit
       --verbose             Makes output verbose
       --dry-run             List project's resources
-      --dont-delete-project
-                            Executes cleanup script without removing the project.
-                            Warning: all project resources will still be deleted.
-      --region-name REGION_NAME
-                            Region to use. Defaults to env[OS_REGION_NAME] or None
-      --endpoint-type ENDPOINT_TYPE
-                            Endpoint type to use. Defaults to
-                            env[OS_ENDPOINT_TYPE] or publicURL
-      --username USERNAME   If --own-project is set : a user name with access to
-                            the project being purged. If --cleanup-project is set
-                            : a user name with admin role in project specified in
-                            --admin-project. Defaults to env[OS_USERNAME]
-      --password PASSWORD   The user's password. Defaults to env[OS_PASSWORD].
-      --admin-project ADMIN_PROJECT
-                            Project name used for authentication. This project
-                            will be purged if --own-project is set. Defaults to
-                            env[OS_TENANT_NAME].
+      --delete-shared-resources
+                            Whether to delete shared resources (public images and
+                            external networks)
       --admin-role-name ADMIN_ROLE_NAME
-                            Name of admin role. Defaults to 'admin'.
-      --auth-url AUTH_URL   Authentication URL. Defaults to env[OS_AUTH_URL].
-      --cleanup-project CLEANUP_PROJECT
-                            ID or Name of project to purge. Not required if --own-
-                            project has been set. Using --cleanup-project requires
+                            Name of admin role. Defaults to 'admin'. This role
+                            will be temporarily granted on the project to purge to
+                            the authenticated user.
+      --purge-project ID_OR_NAME
+                            ID or Name of project to purge. This option requires
                             to authenticate with admin credentials.
-      --own-project         Delete resources of the project used to authenticate.
+      --purge-own-project   Purge resources of the project used to authenticate.
                             Useful if you don't have the admin credentials of the
-                            platform.
-      --insecure            Explicitly allow all OpenStack clients to perform
-                            insecure SSL (https) requests. The server's
-                            certificate will not be verified against any
-                            certificate authorities. This option should be used
-                            with caution.
+                            cloud.
+      --os-cloud <name>     Named cloud to connect to
+      --os-auth-type <name>, --os-auth-plugin <name>
+                            Authentication type to use
+
+    Authentication Options:
+      Options specific to the password plugin.
+
+      --os-auth-url OS_AUTH_URL
+                            Authentication URL
+      --os-domain-id OS_DOMAIN_ID
+                            Domain ID to scope to
+      --os-domain-name OS_DOMAIN_NAME
+                            Domain name to scope to
+      --os-project-id OS_PROJECT_ID, --os-tenant-id OS_PROJECT_ID
+                            Project ID to scope to
+      --os-project-name OS_PROJECT_NAME, --os-tenant-name OS_PROJECT_NAME
+                            Project name to scope to
+      --os-project-domain-id OS_PROJECT_DOMAIN_ID
+                            Domain ID containing project
+      --os-project-domain-name OS_PROJECT_DOMAIN_NAME
+                            Domain name containing project
+      --os-trust-id OS_TRUST_ID
+                            Trust ID
+      --os-default-domain-id OS_DEFAULT_DOMAIN_ID
+                            Optional domain ID to use with v3 and v2 parameters.
+                            It will be used for both the user and project domain
+                            in v3 and ignored in v2 authentication.
+      --os-default-domain-name OS_DEFAULT_DOMAIN_NAME
+                            Optional domain name to use with v3 API and v2
+                            parameters. It will be used for both the user and
+                            project domain in v3 and ignored in v2 authentication.
+      --os-user-id OS_USER_ID
+                            User id
+      --os-username OS_USERNAME, --os-user-name OS_USERNAME
+                            Username
+      --os-user-domain-id OS_USER_DOMAIN_ID
+                            User's domain id
+      --os-user-domain-name OS_USER_DOMAIN_NAME
+                            User's domain name
+      --os-password OS_PASSWORD
+                            User's password
+
+    API Connection Options:
+      Options controlling the HTTP API Connections
+
+      --insecure            Explicitly allow client to perform "insecure" TLS
+                            (https) requests. The server's certificate will not be
+                            verified against any certificate authorities. This
+                            option should be used with caution.
+      --os-cacert <ca-certificate>
+                            Specify a CA bundle file to use in verifying a TLS
+                            (https) server certificate. Defaults to
+                            env[OS_CACERT].
+      --os-cert <certificate>
+                            Defaults to env[OS_CERT].
+      --os-key <key>        Defaults to env[OS_KEY].
+      --timeout <seconds>   Set request timeout (in seconds).
+
+    Service Options:
+      Options controlling the specialization of the API Connection from
+      information found in the catalog
+
+      --os-service-type <name>
+                            Service type to request from the catalog
+      --os-service-name <name>
+                            Service name to request from the catalog
+      --os-interface <name>
+                            API Interface to use [public, internal, admin]
+      --os-region-name <name>
+                            Region of the cloud to use
+      --os-endpoint-override <name>
+                            Endpoint to use instead of the endpoint in the catalog
+      --os-api-version <name>
+                            Which version of the service API to use
 
 Example usage
 -------------
 
-To remove a project, credentials have to be
-provided. The usual OpenStack environment variables can be used. When
-launching the ``ospurge`` script, the project to be cleaned up has
-to be provided, by using either the ``--cleanup-project`` option or the
-``--own-project`` option. When the command returns, any resources associated
-to the project will have been definitively deleted.
+To remove a project, credentials have to be provided. The usual OpenStack
+environment variables can be used. When launching the ``ospurge`` script, the
+project to be cleaned up has to be provided, by using either the
+``--purge-project`` option or the ``--purge-own-project`` option. When the
+command returns, any resources that belong to the project will have been
+definitively deleted.
 
 * Setting OpenStack credentials:
 
@@ -133,110 +201,38 @@ to the project will have been definitively deleted.
     $ export OS_TENANT_NAME=admin
     $ export OS_AUTH_URL=http://localhost:5000/v2.0
 
-* Checking resources of the target project:
-
-.. code-block:: console
-
-    $ ./ospurge --dry-run --cleanup-project demo
-    * Resources type: CinderSnapshots
-
-    * Resources type: NovaServers
-    server vm0 (id 8b0896d9-bcf3-4360-824a-a81865ad2385)
-
-    * Resources type: NeutronFloatingIps
-
-    * Resources type: NeutronInterfaces
-
-    * Resources type: NeutronRouters
-
-    * Resources type: NeutronNetworks
-
-    * Resources type: NeutronSecgroups
-    security group custom (id 8c13e635-6fdc-4332-ba19-c22a7a85c7cc)
-
-    * Resources type: GlanceImages
-
-    * Resources type: SwiftObjects
-
-    * Resources type: SwiftContainers
-
-    * Resources type: CinderVolumes
-    volume vol0 (id ce1380ef-2d66-47a2-9dbf-8dd5d9cd506d)
-
-    * Resources type: CeilometerAlarms
-
-* Removing resources without deleting the project:
-
-.. code-block:: console
-
-    $ ./ospurge --verbose --dont-delete-project --cleanup-project demo
-    INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): keystone.usr.lab0.aub.cw-labs.net
-    INFO:root:* Granting role admin to user e7f562a29da3492baba2cc7c5a1f2d84 on project demo.
-    INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): keystone-admin.usr.lab0.aub.cw-labs.net
-    INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): keystone-admin.usr.lab0.aub.cw-labs.net
-    INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): keystone-admin.usr.lab0.aub.cw-labs.net
-    INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): keystone.usr.lab0.aub.cw-labs.net
-    INFO:root:* Purging CinderSnapshots
-    INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): keystone.usr.lab0.aub.cw-labs.net
-    INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): cinder.usr.lab0.aub.cw-labs.net
-    INFO:root:* Purging NovaServers
-    INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): keystone.usr.lab0.aub.cw-labs.net
-    INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): nova.usr.lab0.aub.cw-labs.net
-    INFO:root:* Deleting server vm0 (id 8b0896d9-bcf3-4360-824a-a81865ad2385).
-    INFO:root:* Purging NeutronFloatingIps
-    INFO:root:* Purging NeutronInterfaces
-    INFO:root:* Purging NeutronRouters
-    INFO:root:* Purging NeutronNetworks
-    INFO:root:* Purging NeutronSecgroups
-    INFO:root:* Deleting security group custom (id 8c13e635-6fdc-4332-ba19-c22a7a85c7cc).
-    INFO:root:* Purging GlanceImages
-    INFO:root:* Purging SwiftObjects
-    INFO:root:* Purging SwiftContainers
-    INFO:root:* Purging CinderVolumes
-    INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): keystone.usr.lab0.aub.cw-labs.net
-    INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): cinder.usr.lab0.aub.cw-labs.net
-    INFO:root:* Deleting volume vol0 (id ce1380ef-2d66-47a2-9dbf-8dd5d9cd506d).
-    INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): cinder.usr.lab0.aub.cw-labs.net
-    INFO:root:* Purging CeilometerAlarms
-
-* Checking that resources have been correctly removed:
-
-.. code-block:: console
-
-    $ ./ospurge --dry-run --cleanup-project demo
-    * Resources type: CinderSnapshots
-
-    * Resources type: NovaServers
-
-    * Resources type: NeutronFloatingIps
-
-    * Resources type: NeutronInterfaces
-
-    * Resources type: NeutronRouters
-
-    * Resources type: NeutronNetworks
-
-    * Resources type: NeutronSecgroups
-
-    * Resources type: GlanceImages
-
-    * Resources type: SwiftObjects
-
-    * Resources type: SwiftContainers
-
-    * Resources type: CinderVolumes
-
-    * Resources type: CeilometerAlarms
-
-* Removing project:
-
-.. code-block:: console
-
-    $ ./ospurge --cleanup-project demo
-    $ ./ospurge --cleanup-project demo
-    Project demo doesn't exist
-
-* Users can be deleted by using the ``python-openstackclient`` command-line
+* Removing resources:
+
+.. code-block:: console
+
+    $ ./ospurge --verbose --purge-project demo
+    WARNING:root:2016-10-27 20:59:12,001:Going to list and/or delete resources from project 'demo'
+    INFO:root:2016-10-27 20:59:12,426:Going to delete VM (id='be1cce96-fd4c-49fc-9029-db410d376258', name='cb63bb6c-de93-4213-9998-68c2a532018a')
+    INFO:root:2016-10-27 20:59:12,967:Waiting for check_prerequisite() in FloatingIPs
+    INFO:root:2016-10-27 20:59:15,169:Waiting for check_prerequisite() in FloatingIPs
+    INFO:root:2016-10-27 20:59:19,258:Going to delete Floating IP (id='14846ada-334a-4447-8763-829364bb0d18')
+    INFO:root:2016-10-27 20:59:19,613:Going to delete Snapshot (id='2e7aa42f-5596-49bf-976a-e572e6c96224', name='cb63bb6c-de93-4213-9998-68c2a532018a')
+    INFO:root:2016-10-27 20:59:19,953:Going to delete Volume Backup (id='64a8b6d8-021e-4680-af58-0a5a04d29ed2', name='cb63bb6c-de93-4213-9998-68c2a532018a'
+    INFO:root:2016-10-27 20:59:20,717:Going to delete Router Interface (id='7240a5df-eb83-447b-8966-f7ad2a583bb9', router_id='7057d141-29c7-4596-8312-16b441012083')
+    INFO:root:2016-10-27 20:59:27,009:Going to delete Router Interface (id='fbae389d-ff69-4649-95cb-5ec8a8a64d03', router_id='7057d141-29c7-4596-8312-16b441012083')
+    INFO:root:2016-10-27 20:59:28,672:Going to delete Router (id='7057d141-29c7-4596-8312-16b441012083', name='router1')
+    INFO:root:2016-10-27 20:59:31,365:Going to delete Port (id='09e452bf-804d-489a-889c-be0eda7ecbca', network_id='e282fc84-7c79-4d47-a94c-b74f7a775682)'
+    INFO:root:2016-10-27 20:59:32,398:Going to delete Security Group (id='7028fbd2-c998-428d-8d41-28293c3de052', name='6256fb6c-0118-4f18-8424-0f68aadb9457')
+    INFO:root:2016-10-27 20:59:33,668:Going to delete Network (id='dd33dd12-4c3e-4162-8a5c-23941922271f', name='private')
+    INFO:root:2016-10-27 20:59:36,119:Going to delete Image (id='39df8b40-3acd-404c-935c-d9f15732dfa6', name='cb63bb6c-de93-4213-9998-68c2a532018a')
+    INFO:root:2016-10-27 20:59:36,953:Going to delete Volume (id='f482283a-25a9-419e-af92-81ec8c62e1cd', name='cb63bb6c-de93-4213-9998-68c2a532018a')
+    INFO:root:2016-10-27 20:59:48,790:Going to delete Object 'cb63bb6c-de93-4213-9998-68c2a532018a.raw' from Container 'cb63bb6c-de93-4213-9998-68c2a532018a'
+    INFO:root:2016-10-27 20:59:48,895:Going to delete Container (name='6256fb6c-0118-4f18-8424-0f68aadb9457')
+    INFO:root:2016-10-27 20:59:48,921:Going to delete Container (name='volumebackups')
+
+* Projects can be deleted with the ``python-openstackclient`` command-line
+  interface:
+
+.. code-block:: console
+
+    $ openstack project delete <project>
+
+* Users can be deleted with the ``python-openstackclient`` command-line
   interface:
 
 .. code-block:: console
@@ -244,10 +240,48 @@ to the project will have been definitively deleted.
     $ openstack user delete <user>
 
 
+How to extend
+-------------
+
+Given the ever-widening OpenStack ecosystem, OSPurge can't support every
+OpenStack service. We intend to support, in-tree, only the 'core' services.
+Fortunately, OSPurge is easily extensible. All you have to do is add a new
+Python module in the ``resources`` package and define one or more Python
+class(es) that subclass ``ospurge.resources.base.ServiceResource``, as in the
+sketch below. Your module will automatically be loaded and your methods
+called. Have a look at the ``main.main`` and ``main.runner`` functions to
+fully understand the mechanism.
+
+Note: We won't accept any patch that broadens what OSPurge supports beyond
+the core services.
+
 How to contribute
 -----------------
 
-OSpurge is hosted on the OpenStack infrastructure and is using
-`Gerrit <https://review.openstack.org>`_ to manage contributions. You can
-contribute to the project by following the
+OSPurge is hosted on the OpenStack infrastructure and is using
+`Gerrit <https://review.openstack.org/#/q/project:openstack/ospurge>`_ to
+manage contributions. You can contribute to the project by following the
 `OpenStack Development workflow <http://docs.openstack.org/infra/manual/developers.html#development-workflow>`_.
 
+Start hacking right away with:
+
+.. code-block:: console
+
+    $ git clone git://git.openstack.org/openstack/ospurge
+
+
+Design decisions
+----------------
+
+* OSPurge depends on `os-client-config`_ to manage authentication. This way,
+  environment variables (OS_*) and CLI options are properly handled.
+
+* OSPurge is built on top of `shade`_. shade is a simple client library for
+  interacting with OpenStack clouds. With shade, OSPurge can focus on the
+  resource-cleaning logic and not on properly building the various Python
+  OpenStack clients and dealing with their not-so-intuitive APIs; a short
+  illustration follows this list.
+
+.. _shade: https://github.com/openstack-infra/shade/
+.. _os-client-config: https://github.com/openstack/os-client-config
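For illustration, a minimal sketch of what working with shade looks like. This
is not OSPurge code; ``mycloud`` is a placeholder cloud name expected to exist
in ``clouds.yaml`` (resolved via os-client-config) or via OS_* environment
variables.

.. code-block:: python

    import shade

    # Credentials are resolved by os-client-config from clouds.yaml / OS_* vars.
    cloud = shade.openstack_cloud(cloud='mycloud')

    # One object gives access to every service, e.g. Nova servers and Cinder
    # volumes, without building the individual Python clients by hand.
    for server in cloud.list_servers():
        print(server.id, server.name)
    for volume in cloud.list_volumes():
        print(volume.id, volume.name)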
@@ -11,8 +11,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-import os
-import sys
 import openstackdocstheme
 
 copyright = u'2015, OpenStack contributors'
@@ -1 +0,0 @@
-.. include:: ../../README.rst
@@ -0,0 +1,14 @@
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+import pbr.version
+
+__version__ = pbr.version.VersionInfo('ospurge').version_string_with_vcs()
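Given the ``__version__`` attribute added above, the installed package can
report its own version. A small usage note, assuming ``ospurge`` is installed
so that pbr can resolve the package metadata:

.. code-block:: python

    # Assumes the ospurge package is installed (pbr reads the package metadata).
    import ospurge

    print(ospurge.__version__)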
ospurge/base.py  (168)
@@ -1,168 +0,0 @@

# This software is released under the MIT License.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

import logging
import time

from keystoneauth1 import exceptions as api_exceptions
from keystoneauth1.identity import generic as keystone_auth
from keystoneauth1 import session as keystone_session
from keystoneclient import client as keystone_client
from keystoneclient import exceptions as keystone_exceptions

from ospurge import constants
from ospurge import exceptions

# Decorators


def retry(service_name):
    def factory(func):
        """Decorator allowing to retry in case of failure."""
        def wrapper(*args, **kwargs):
            n = 0
            while True:
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    if getattr(e, 'http_status', False) == 404:
                        # Sometimes a resource can be deleted manually by
                        # someone else while ospurge is running and has
                        # listed it. If this happens, we log a warning.
                        logging.warning(
                            "Can not delete the resource because it does not"
                            " exist : %s", e
                        )
                        # No need to retry deleting a non-existing resource
                        break
                    else:
                        if n == constants.RETRIES:
                            raise exceptions.DeletionFailed(service_name)
                        n += 1
                        logging.info("* Deletion failed - "
                                     "Retrying in %s seconds - "
                                     "Retry count %s", constants.TIMEOUT, n)
                        time.sleep(constants.TIMEOUT)
        return wrapper
    return factory


# Classes


class Session(object):

    """A Session stores information that can be used by the different Openstack Clients.

    The most important data is:
    * self.token - The Openstack token to be used across services;
    * self.catalog - Allowing to retrieve services' endpoints.
    """

    def __init__(self, username, password, project_id, auth_url,
                 endpoint_type="publicURL", insecure=False, **kwargs):

        data = {
            'username': username,
            'password': password,
            'project_id': project_id,
            'user_domain_id': kwargs.get('user_domain_id'),
            'user_domain_name': kwargs.get('user_domain_name'),
            'project_domain_id': kwargs.get('project_domain_id'),
            'project_domain_name': kwargs.get('project_domain_name'),
            'domain_id': kwargs.get('domain_id')
        }

        auth = keystone_auth.Password(auth_url, **data)
        session = keystone_session.Session(auth=auth, verify=(not insecure))
        self.client = keystone_client.Client(session=session)

        # Storing username, password, project_id and auth_url for
        # use by client libraries that cannot use an existing token.
        self.username = username
        self.password = password
        self.project_id = auth.auth_ref.project_id
        self.auth_url = auth_url
        self.region_name = kwargs['region_name']
        self.insecure = insecure
        # Session variables to be used by clients when possible
        self.token = auth.auth_ref.auth_token
        self.user_id = auth.auth_ref.user_id
        self.project_name = self.client.project_name
        self.keystone_session = session
        self.endpoint_type = endpoint_type
        self.catalog = auth.auth_ref.service_catalog.get_endpoints()

        try:
            # Detect if we are admin or not
            self.client.roles.list()  # Only admins are allowed to do this
        except (
            # The exception depends on the OpenStack infrastructure.
            api_exceptions.Forbidden,
            keystone_exceptions.ConnectionRefused,  # admin URL not permitted
            api_exceptions.Unauthorized,
        ):
            self.is_admin = False
        else:
            self.is_admin = True

    def get_endpoint(self, service_type):
        try:
            if self.client.version == "v2.0":
                return self.catalog[service_type][0][self.endpoint_type]
            else:
                return self.catalog[service_type][0]['url']
        except (KeyError, IndexError):
            # Endpoint could not be found
            raise exceptions.EndpointNotFound(service_type)


class Resources(object):

    """Abstract base class for all resources to be removed."""

    def __init__(self, session):
        self.session = session

    def list(self):
        pass

    def delete(self, resource):
        """Displays informational message about a resource deletion."""
        logging.info("* Deleting %s.", self.resource_str(resource))

    def purge(self):
        """Delete all resources."""
        # Purging is displayed and done only if self.list succeeds
        resources = self.list()
        c_name = self.__class__.__name__
        logging.info("* Purging %s", c_name)
        for resource in resources:
            retry(c_name)(self.delete)(resource)

    def dump(self):
        """Display all available resources."""
        # Resource types and resources are displayed only if self.list succeeds
        resources = self.list()
        c_name = self.__class__.__name__
        print("* Resources type: %s" % c_name)
        for resource in resources:
            print(self.resource_str(resource))
        print("")
@@ -1,872 +0,0 @@

#!/usr/bin/env python
# -*- encoding: utf-8 -*-
#
# This software is released under the MIT License.
#
# Copyright (c) 2014 Cloudwatt
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

import argparse
from distutils import version
import logging
import os
import sys

import ceilometerclient.exc
from ceilometerclient.v2 import client as ceilometer_client
import cinderclient
from cinderclient.v1 import client as cinder_client
import glanceclient.exc
from glanceclient.v1 import client as glance_client
from heatclient import client as heat_client
import heatclient.openstack.common.apiclient.exceptions
from keystoneauth1 import exceptions as api_exceptions
from keystoneauth1.identity import generic as keystone_auth
from keystoneauth1 import session as keystone_session
from keystoneclient import client as keystone_client
import neutronclient.common.exceptions
from neutronclient.v2_0 import client as neutron_client
from novaclient import client as nova_client
import novaclient.exceptions
import requests
from swiftclient import client as swift_client

from ospurge import base
from ospurge import constants
from ospurge import exceptions


class SwiftResources(base.Resources):

    def __init__(self, session):
        super(SwiftResources, self).__init__(session)
        self.endpoint = self.session.get_endpoint("object-store")
        self.token = self.session.token
        conn = swift_client.HTTPConnection(self.endpoint,
                                           insecure=self.session.insecure)
        self.http_conn = conn.parsed_url, conn

    # This method is used to retrieve Objects as well as Containers.
    def list_containers(self):
        containers = swift_client.get_account(self.endpoint,
                                              self.token,
                                              http_conn=self.http_conn)[1]
        return (cont['name'] for cont in containers)


class SwiftObjects(SwiftResources):

    def list(self):
        swift_objects = []
        for cont in self.list_containers():
            objs = [{'container': cont, 'name': obj['name']} for obj in
                    swift_client.get_container(self.endpoint,
                                               self.token,
                                               cont,
                                               http_conn=self.http_conn)[1]]
            swift_objects.extend(objs)
        return swift_objects

    def delete(self, obj):
        super(SwiftObjects, self).delete(obj)
        swift_client.delete_object(self.endpoint, token=self.token,
                                   http_conn=self.http_conn,
                                   container=obj['container'],
                                   name=obj['name'])

    def resource_str(self, obj):
        return u"object {} in container {}".format(obj['name'],
                                                   obj['container'])


class SwiftContainers(SwiftResources):

    def list(self):
        return self.list_containers()

    def delete(self, container):
        """Container must be empty for deletion to succeed."""
        super(SwiftContainers, self).delete(container)
        swift_client.delete_container(self.endpoint, self.token, container,
                                      http_conn=self.http_conn)

    def resource_str(self, obj):
        return u"container {}".format(obj)


class CinderResources(base.Resources):

    def __init__(self, session):
        super(CinderResources, self).__init__(session)
        self.client = cinder_client.Client("2.1",
                                           session=session.keystone_session)


class CinderSnapshots(CinderResources):

    def list(self):
        return self.client.volume_snapshots.list()

    def delete(self, snap):
        super(CinderSnapshots, self).delete(snap)
        self.client.volume_snapshots.delete(snap)

    def resource_str(self, snap):
        return u"snapshot {} (id {})".format(snap.display_name, snap.id)


class CinderVolumes(CinderResources):

    def list(self):
        return self.client.volumes.list()

    def delete(self, vol):
        """Snapshots created from the volume must be deleted first."""
        super(CinderVolumes, self).delete(vol)
        self.client.volumes.delete(vol)

    def resource_str(self, vol):
        return u"volume {} (id {})".format(vol.display_name, vol.id)


class CinderBackups(CinderResources):

    def list(self):
        if self.session.is_admin and version.LooseVersion(
                cinderclient.version_info.version_string()) < '1.4.0':
            logging.warning(
                'cinder volume-backups are ignored when ospurge is '
                'launched with admin credentials because of the '
                'following bug: '
                'https://bugs.launchpad.net/python-cinderclient/+bug/1422046')
            return []
        return self.client.backups.list()

    def delete(self, backup):
        super(CinderBackups, self).delete(backup)
        self.client.backups.delete(backup)

    def resource_str(self, backup):
        return u"backup {} (id {}) of volume {}".format(
            backup.name, backup.id, backup.volume_id)


class NeutronResources(base.Resources):

    def __init__(self, session):
        super(NeutronResources, self).__init__(session)
        self.client = neutron_client.Client(session=session.keystone_session)
        self.project_id = session.project_id

    # This method is used for routers and interfaces removal
    def list_routers(self):
        return filter(
            self._owned_resource,
            self.client.list_routers(tenant_id=self.project_id)['routers'])

    def _owned_resource(self, res):
        # Only considering resources owned by project.
        # We try to filter directly in the client.list() commands, but some 3rd
        # party Neutron plugins may ignore the "tenant_id=self.project_id"
        # keyword filtering parameter. An extra check does not cost much and
        # keeps us on the safe side.
        return res['tenant_id'] == self.project_id


class NeutronRouters(NeutronResources):

    def list(self):
        return self.list_routers()

    def delete(self, router):
        """Interfaces must be deleted first."""
        super(NeutronRouters, self).delete(router)
        # Remove the router gateway prior to removing the router itself
        self.client.remove_gateway_router(router['id'])
        self.client.delete_router(router['id'])

    @staticmethod
    def resource_str(router):
        return u"router {} (id {})".format(router['name'], router['id'])


class NeutronInterfaces(NeutronResources):

    def list(self):
        # Only considering "router_interface" ports
        # (not gateways, neither unbound ports)
        all_ports = [
            port for port in self.client.list_ports(
                tenant_id=self.project_id)['ports']
            if port["device_owner"] in ("network:router_interface",
                                        "network:router_interface_distributed")
        ]
        return filter(self._owned_resource, all_ports)

    def delete(self, interface):
        super(NeutronInterfaces, self).delete(interface)
        self.client.remove_interface_router(interface['device_id'],
                                            {'port_id': interface['id']})

    @staticmethod
    def resource_str(interface):
        return u"interface {} (id {})".format(interface['name'],
                                              interface['id'])


class NeutronPorts(NeutronResources):

    # When created, unbound ports' device_owner is "". device_owner
    # is of the form "compute:*" if it has been bound to some vm in
    # the past.
    def list(self):
        all_ports = [
            port for port in self.client.list_ports(
                tenant_id=self.project_id)['ports']
            if port["device_owner"] == ""
            or port["device_owner"].startswith("compute:")
        ]
        return filter(self._owned_resource, all_ports)

    def delete(self, port):
        super(NeutronPorts, self).delete(port)
        self.client.delete_port(port['id'])

    @staticmethod
    def resource_str(port):
        return u"port {} (id {})".format(port['name'], port['id'])


class NeutronNetworks(NeutronResources):

    def list(self):
        return filter(self._owned_resource,
                      self.client.list_networks(
                          tenant_id=self.project_id)['networks'])

    def delete(self, net):
        """Delete a Neutron network

        Interfaces connected to the network must be deleted first,
        implying there must not be any VM on the network.
        """
        super(NeutronNetworks, self).delete(net)
        self.client.delete_network(net['id'])

    @staticmethod
    def resource_str(net):
        return u"network {} (id {})".format(net['name'], net['id'])


class NeutronSecgroups(NeutronResources):

    def list(self):
        # filtering out the default security group (cannot be removed)
        def secgroup_filter(secgroup):
            if secgroup['name'] == 'default':
                return False
            return self._owned_resource(secgroup)

        try:
            sgs = self.client.list_security_groups(
                tenant_id=self.project_id)['security_groups']
            return filter(secgroup_filter, sgs)
        except neutronclient.common.exceptions.NeutronClientException as err:
            if getattr(err, "status_code", None) == 404:
                raise exceptions.ResourceNotEnabled
            raise

    def delete(self, secgroup):
        """VMs using the security group should be deleted first."""
        super(NeutronSecgroups, self).delete(secgroup)
        self.client.delete_security_group(secgroup['id'])

    @staticmethod
    def resource_str(secgroup):
        return u"security group {} (id {})".format(
            secgroup['name'], secgroup['id'])


class NeutronFloatingIps(NeutronResources):

    def list(self):
        return filter(self._owned_resource,
                      self.client.list_floatingips(
                          tenant_id=self.project_id)['floatingips'])

    def delete(self, floating_ip):
        super(NeutronFloatingIps, self).delete(floating_ip)
        self.client.delete_floatingip(floating_ip['id'])

    @staticmethod
    def resource_str(floating_ip):
        return u"floating ip {} (id {})".format(
            floating_ip['floating_ip_address'], floating_ip['id'])


class NeutronLbMembers(NeutronResources):

    def list(self):
        return filter(self._owned_resource, self.client.list_members(
            tenant_id=self.project_id)['members'])

    def delete(self, member):
        super(NeutronLbMembers, self).delete(member)
        self.client.delete_member(member['id'])

    @staticmethod
    def resource_str(member):
        return u"lb-member {} (id {})".format(member['address'], member['id'])


class NeutronLbPool(NeutronResources):

    def list(self):
        return filter(self._owned_resource, self.client.list_pools(
            tenant_id=self.project_id)['pools'])

    def delete(self, pool):
        super(NeutronLbPool, self).delete(pool)
        self.client.delete_pool(pool['id'])

    @staticmethod
    def resource_str(pool):
        return u"lb-pool {} (id {})".format(pool['name'], pool['id'])


class NeutronLbVip(NeutronResources):

    def list(self):
        return filter(self._owned_resource, self.client.list_vips(
            tenant_id=self.project_id)['vips'])

    def delete(self, vip):
        super(NeutronLbVip, self).delete(vip)
        self.client.delete_vip(vip['id'])

    @staticmethod
    def resource_str(vip):
        return u"lb-vip {} (id {})".format(vip['name'], vip['id'])


class NeutronLbHealthMonitor(NeutronResources):

    def list(self):
        return filter(self._owned_resource, self.client.list_health_monitors(
            tenant_id=self.project_id)['health_monitors'])
|
|
||||||
|
|
||||||
def delete(self, health_monitor):
|
|
||||||
super(NeutronLbHealthMonitor, self).delete(health_monitor)
|
|
||||||
self.client.delete_health_monitor(health_monitor['id'])
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def resource_str(health_monitor):
|
|
||||||
return u"lb-health_monitor type {} (id {})".format(
|
|
||||||
health_monitor['type'], health_monitor['id'])
|
|
||||||
|
|
||||||
|
|
||||||
class NeutronMeteringLabel(NeutronResources):
|
|
||||||
|
|
||||||
def list(self):
|
|
||||||
return filter(self._owned_resource, self.client.list_metering_labels(
|
|
||||||
tenant_id=self.project_id)['metering_labels'])
|
|
||||||
|
|
||||||
def delete(self, metering_label):
|
|
||||||
super(NeutronMeteringLabel, self).delete(metering_label)
|
|
||||||
self.client.delete_metering_label(metering_label['id'])
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def resource_str(metering_label):
|
|
||||||
return u"meter-label {} (id {})".format(
|
|
||||||
metering_label['name'], metering_label['id'])
|
|
||||||
|
|
||||||
|
|
||||||
class NeutronFireWallPolicy(NeutronResources):
|
|
||||||
|
|
||||||
def list(self):
|
|
||||||
return filter(self._owned_resource, self.client.list_firewall_policies(
|
|
||||||
tenant_id=self.project_id)['firewall_policies'])
|
|
||||||
|
|
||||||
def delete(self, firewall_policy):
|
|
||||||
super(NeutronFireWallPolicy, self).delete(firewall_policy)
|
|
||||||
self.client.delete_firewall_policy(firewall_policy['id'])
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def resource_str(firewall_policy):
|
|
||||||
return u"Firewall policy {} (id {})".format(
|
|
||||||
firewall_policy['name'], firewall_policy['id'])
|
|
||||||
|
|
||||||
|
|
||||||
class NeutronFireWallRule(NeutronResources):
|
|
||||||
|
|
||||||
def list(self):
|
|
||||||
return filter(self._owned_resource, self.client.list_firewall_rules(
|
|
||||||
tenant_id=self.project_id)['firewall_rules'])
|
|
||||||
|
|
||||||
def delete(self, firewall_rule):
|
|
||||||
super(NeutronFireWallRule, self).delete(firewall_rule)
|
|
||||||
self.client.delete_firewall_rule(firewall_rule['id'])
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def resource_str(firewall_rule):
|
|
||||||
return u"Firewall rule {} (id {})".format(
|
|
||||||
firewall_rule['name'], firewall_rule['id'])
|
|
||||||
|
|
||||||
|
|
||||||
class NeutronFireWall(NeutronResources):
|
|
||||||
|
|
||||||
def list(self):
|
|
||||||
return filter(self._owned_resource, self.client.list_firewalls(
|
|
||||||
tenant_id=self.project_id)['firewalls'])
|
|
||||||
|
|
||||||
def delete(self, firewall):
|
|
||||||
super(NeutronFireWall, self).delete(firewall)
|
|
||||||
self.client.delete_firewall(firewall['id'])
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def resource_str(firewall):
|
|
||||||
return u"Firewall {} (id {})".format(firewall['name'], firewall['id'])
|
|
||||||
|
|
||||||
|
|
||||||
class NovaServers(base.Resources):
|
|
||||||
|
|
||||||
def __init__(self, session):
|
|
||||||
super(NovaServers, self).__init__(session)
|
|
||||||
self.client = nova_client.Client("2.1",
|
|
||||||
session=session.keystone_session)
|
|
||||||
self.project_id = session.project_id
|
|
||||||
|
|
||||||
"""Manage nova resources"""
|
|
||||||
|
|
||||||
def list(self):
|
|
||||||
return self.client.servers.list()
|
|
||||||
|
|
||||||
def delete(self, server):
|
|
||||||
super(NovaServers, self).delete(server)
|
|
||||||
self.client.servers.delete(server)
|
|
||||||
|
|
||||||
def resource_str(self, server):
|
|
||||||
return u"server {} (id {})".format(server.name, server.id)
|
|
||||||
|
|
||||||
|
|
||||||
class GlanceImages(base.Resources):
|
|
||||||
|
|
||||||
def __init__(self, session):
|
|
||||||
self.client = glance_client.Client(
|
|
||||||
endpoint=session.get_endpoint("image"),
|
|
||||||
token=session.token, insecure=session.insecure)
|
|
||||||
self.project_id = session.project_id
|
|
||||||
|
|
||||||
def list(self):
|
|
||||||
return filter(self._owned_resource, self.client.images.list(
|
|
||||||
owner=self.project_id))
|
|
||||||
|
|
||||||
def delete(self, image):
|
|
||||||
super(GlanceImages, self).delete(image)
|
|
||||||
self.client.images.delete(image.id)
|
|
||||||
|
|
||||||
def resource_str(self, image):
|
|
||||||
return u"image {} (id {})".format(image.name, image.id)
|
|
||||||
|
|
||||||
def _owned_resource(self, res):
|
|
||||||
# Only considering resources owned by project
|
|
||||||
return res.owner == self.project_id
|
|
||||||
|
|
||||||
|
|
||||||
class HeatStacks(base.Resources):
|
|
||||||
|
|
||||||
def __init__(self, session):
|
|
||||||
self.client = heat_client.Client(
|
|
||||||
"1",
|
|
||||||
endpoint=session.get_endpoint("orchestration"),
|
|
||||||
token=session.token, insecure=session.insecure)
|
|
||||||
self.project_id = session.project_id
|
|
||||||
|
|
||||||
def list(self):
|
|
||||||
return self.client.stacks.list()
|
|
||||||
|
|
||||||
def delete(self, stack):
|
|
||||||
super(HeatStacks, self).delete(stack)
|
|
||||||
if stack.stack_status == "DELETE_FAILED":
|
|
||||||
self.client.stacks.abandon(stack.id)
|
|
||||||
else:
|
|
||||||
self.client.stacks.delete(stack.id)
|
|
||||||
|
|
||||||
def resource_str(self, stack):
|
|
||||||
return u"stack {}".format(stack.id)
|
|
||||||
|
|
||||||
|
|
||||||
class CeilometerAlarms(base.Resources):
|
|
||||||
|
|
||||||
def __init__(self, session):
|
|
||||||
# Ceilometer Client needs a method that returns the token
|
|
||||||
def get_token():
|
|
||||||
return session.token
|
|
||||||
self.client = ceilometer_client.Client(
|
|
||||||
endpoint=session.get_endpoint("metering"),
|
|
||||||
endpoint_type=session.endpoint_type,
|
|
||||||
region_name=session.region_name,
|
|
||||||
token=get_token, insecure=session.insecure)
|
|
||||||
self.project_id = session.project_id
|
|
||||||
|
|
||||||
def list(self):
|
|
||||||
query = [{'field': 'project_id',
|
|
||||||
'op': 'eq',
|
|
||||||
'value': self.project_id}]
|
|
||||||
return self.client.alarms.list(q=query)
|
|
||||||
|
|
||||||
def delete(self, alarm):
|
|
||||||
super(CeilometerAlarms, self).delete(alarm)
|
|
||||||
self.client.alarms.delete(alarm.alarm_id)
|
|
||||||
|
|
||||||
def resource_str(self, alarm):
|
|
||||||
return u"alarm {}".format(alarm.name)
|
|
||||||
|
|
||||||
|
|
||||||
class KeystoneManager(object):
|
|
||||||
|
|
||||||
"""Manages Keystone queries."""
|
|
||||||
|
|
||||||
def __init__(self, username, password, project, auth_url, insecure,
|
|
||||||
**kwargs):
|
|
||||||
data = {
|
|
||||||
'username': username,
|
|
||||||
'password': password,
|
|
||||||
'project_name': project,
|
|
||||||
}
|
|
||||||
|
|
||||||
if kwargs['user_domain_name'] is not None:
|
|
||||||
if kwargs['project_domain_name'] is None:
|
|
||||||
kwargs['project_domain_name'] = 'Default'
|
|
||||||
data.update({
|
|
||||||
'domain_id': kwargs.get('domain_id'),
|
|
||||||
'project_domain_id': kwargs.get('project_domain_id'),
|
|
||||||
'project_domain_name': kwargs.get('project_domain_name'),
|
|
||||||
'user_domain_id': kwargs.get('user_domain_id'),
|
|
||||||
'user_domain_name': kwargs.get('user_domain_name')
|
|
||||||
})
|
|
||||||
|
|
||||||
self.auth = keystone_auth.Password(auth_url, **data)
|
|
||||||
session = keystone_session.Session(auth=self.auth, verify=(not insecure))
|
|
||||||
self.client = keystone_client.Client(session=session)
|
|
||||||
|
|
||||||
self.admin_role_id = None
|
|
||||||
self.tenant_info = None
|
|
||||||
self.admin_role_name = kwargs['admin_role_name']
|
|
||||||
self.user_id = self.auth.auth_ref.user_id
|
|
||||||
|
|
||||||
@property
|
|
||||||
def client_projects(self):
|
|
||||||
if self.client.version == "v2.0":
|
|
||||||
return self.client.tenants
|
|
||||||
return self.client.projects
|
|
||||||
|
|
||||||
def get_project_id(self, project_name_or_id=None):
|
|
||||||
"""Get a project's ID.
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
* ID of current project if called without parameter,
|
|
||||||
* ID of project given as parameter if one is given.
|
|
||||||
"""
|
|
||||||
if project_name_or_id is None:
|
|
||||||
return self.auth.auth_ref.project_id
|
|
||||||
|
|
||||||
try:
|
|
||||||
self.tenant_info = self.client_projects.get(project_name_or_id)
|
|
||||||
# If it doesn't raise a 404, project_name_or_id is
|
|
||||||
# already the project's id
|
|
||||||
project_id = project_name_or_id
|
|
||||||
except api_exceptions.NotFound:
|
|
||||||
try:
|
|
||||||
# Can raise api_exceptions.Forbidden:
|
|
||||||
tenants = self.client_projects.list()
|
|
||||||
project_id = filter(
|
|
||||||
lambda x: x.name == project_name_or_id, tenants)[0].id
|
|
||||||
except IndexError:
|
|
||||||
raise exceptions.NoSuchProject(project_name_or_id)
|
|
||||||
|
|
||||||
if not self.tenant_info:
|
|
||||||
self.tenant_info = self.client_projects.get(project_id)
|
|
||||||
return project_id
|
|
||||||
|
|
||||||
def enable_project(self, project_id):
|
|
||||||
logging.info("* Enabling project {}.".format(project_id))
|
|
||||||
self.tenant_info = self.client_projects.update(project_id, enabled=True)
|
|
||||||
|
|
||||||
def disable_project(self, project_id):
|
|
||||||
logging.info("* Disabling project {}.".format(project_id))
|
|
||||||
self.tenant_info = self.client_projects.update(project_id, enabled=False)
|
|
||||||
|
|
||||||
def get_admin_role_id(self):
|
|
||||||
if not self.admin_role_id:
|
|
||||||
roles = self.client.roles.list()
|
|
||||||
self.admin_role_id = filter(lambda x: x.name == self.admin_role_name, roles)[0].id
|
|
||||||
return self.admin_role_id
|
|
||||||
|
|
||||||
def become_project_admin(self, project_id):
|
|
||||||
user_id = self.user_id
|
|
||||||
admin_role_id = self.get_admin_role_id()
|
|
||||||
logging.info("* Granting role admin to user {} on project {}.".format(
|
|
||||||
user_id, project_id))
|
|
||||||
if self.client.version == "v2.0":
|
|
||||||
return self.client.roles.add_user_role(user_id, admin_role_id,
|
|
||||||
project_id)
|
|
||||||
else:
|
|
||||||
return self.client.roles.grant(role=admin_role_id, user=user_id,
|
|
||||||
project=project_id)
|
|
||||||
|
|
||||||
def undo_become_project_admin(self, project_id):
|
|
||||||
user_id = self.user_id
|
|
||||||
admin_role_id = self.get_admin_role_id()
|
|
||||||
logging.info("* Removing role admin from user {} on project {}.".format(
|
|
||||||
user_id, project_id))
|
|
||||||
if self.client.version == "v2.0":
|
|
||||||
return self.client.roles.remove_user_role(user_id,
|
|
||||||
admin_role_id,
|
|
||||||
project_id)
|
|
||||||
else:
|
|
||||||
return self.client.roles.revoke(role=admin_role_id,
|
|
||||||
user=user_id,
|
|
||||||
project=project_id)
|
|
||||||
|
|
||||||
def delete_project(self, project_id):
|
|
||||||
logging.info("* Deleting project {}.".format(project_id))
|
|
||||||
self.client_projects.delete(project_id)
|
|
||||||
|
|
||||||
|
|
||||||
def perform_on_project(admin_name, password, project, auth_url,
|
|
||||||
endpoint_type='publicURL', action='dump',
|
|
||||||
insecure=False, **kwargs):
|
|
||||||
"""Perform provided action on all resources of project.
|
|
||||||
|
|
||||||
action can be: 'purge' or 'dump'
|
|
||||||
"""
|
|
||||||
session = base.Session(admin_name, password, project, auth_url,
|
|
||||||
endpoint_type, insecure, **kwargs)
|
|
||||||
error = None
|
|
||||||
for rc in constants.RESOURCES_CLASSES:
|
|
||||||
try:
|
|
||||||
resources = globals()[rc](session)
|
|
||||||
res_actions = {'purge': resources.purge,
|
|
||||||
'dump': resources.dump}
|
|
||||||
res_actions[action]()
|
|
||||||
except (exceptions.EndpointNotFound,
|
|
||||||
api_exceptions.EndpointNotFound,
|
|
||||||
neutronclient.common.exceptions.EndpointNotFound,
|
|
||||||
cinderclient.exceptions.EndpointNotFound,
|
|
||||||
novaclient.exceptions.EndpointNotFound,
|
|
||||||
heatclient.openstack.common.apiclient.exceptions.EndpointNotFound,
|
|
||||||
exceptions.ResourceNotEnabled):
|
|
||||||
# If the service is not in Keystone's service catalog, ignore it
|
|
||||||
pass
|
|
||||||
except requests.exceptions.MissingSchema as e:
|
|
||||||
logging.warning(
|
|
||||||
'Some resources may not have been deleted, "{!s}" is '
|
|
||||||
'improperly configured and returned: {!r}\n'.format(rc, e))
|
|
||||||
except (ceilometerclient.exc.InvalidEndpoint, glanceclient.exc.InvalidEndpoint) as e:
|
|
||||||
logging.warning(
|
|
||||||
"Unable to connect to {} endpoint: {}".format(rc, e.message))
|
|
||||||
error = exceptions.InvalidEndpoint(rc)
|
|
||||||
except (neutronclient.common.exceptions.NeutronClientException):
|
|
||||||
# If the service is not configured, ignore it
|
|
||||||
pass
|
|
||||||
if error:
|
|
||||||
raise error
|
|
||||||
|
|
||||||
|
|
||||||
# From Russell Heilling
|
|
||||||
# http://stackoverflow.com/questions/10551117/setting-options-from-environment-variables-when-using-argparse
|
|
||||||
class EnvDefault(argparse.Action):
|
|
||||||
|
|
||||||
def __init__(self, envvar, required=True, default=None, **kwargs):
|
|
||||||
# Overriding default with environment variable if available
|
|
||||||
if envvar in os.environ:
|
|
||||||
default = os.environ[envvar]
|
|
||||||
if required and default:
|
|
||||||
required = False
|
|
||||||
super(EnvDefault, self).__init__(default=default, required=required,
|
|
||||||
**kwargs)
|
|
||||||
|
|
||||||
def __call__(self, parser, namespace, values, option_string=None):
|
|
||||||
setattr(namespace, self.dest, values)
|
|
||||||
|
|
||||||
|
|
||||||
def parse_args():
|
|
||||||
desc = "Purge resources from an OpenStack project."
|
|
||||||
parser = argparse.ArgumentParser(description=desc)
|
|
||||||
parser.add_argument("--verbose", action="store_true",
|
|
||||||
help="Makes output verbose")
|
|
||||||
parser.add_argument("--dry-run", action="store_true",
|
|
||||||
help="List project's resources")
|
|
||||||
parser.add_argument("--dont-delete-project", action="store_true",
|
|
||||||
help="Executes cleanup script without removing the project. "
|
|
||||||
"Warning: all project resources will still be deleted.")
|
|
||||||
parser.add_argument("--region-name", action=EnvDefault, required=False,
|
|
||||||
envvar='OS_REGION_NAME', default=None,
|
|
||||||
help="Region to use. Defaults to env[OS_REGION_NAME] "
|
|
||||||
"or None")
|
|
||||||
parser.add_argument("--endpoint-type", action=EnvDefault,
|
|
||||||
envvar='OS_ENDPOINT_TYPE', default="publicURL",
|
|
||||||
help="Endpoint type to use. Defaults to "
|
|
||||||
"env[OS_ENDPOINT_TYPE] or publicURL")
|
|
||||||
parser.add_argument("--username", action=EnvDefault,
|
|
||||||
envvar='OS_USERNAME', required=True,
|
|
||||||
help="If --own-project is set: a user name with access to the "
|
|
||||||
"project being purged. If --cleanup-project is set: "
|
|
||||||
"a user name with admin role in project specified in --admin-project. "
|
|
||||||
"Defaults to env[OS_USERNAME]")
|
|
||||||
parser.add_argument("--password", action=EnvDefault,
|
|
||||||
envvar='OS_PASSWORD', required=True,
|
|
||||||
help="The user's password. Defaults "
|
|
||||||
"to env[OS_PASSWORD].")
|
|
||||||
parser.add_argument("--admin-project", action=EnvDefault,
|
|
||||||
envvar='OS_TENANT_NAME', required=False,
|
|
||||||
help="Project name used for authentication. This project "
|
|
||||||
"will be purged if --own-project is set. "
|
|
||||||
"Defaults to env[OS_TENANT_NAME].")
|
|
||||||
parser.add_argument("--admin-role-name", required=False, default="admin",
|
|
||||||
help="Name of admin role. Defaults to 'admin'.")
|
|
||||||
parser.add_argument("--auth-url", action=EnvDefault,
|
|
||||||
envvar='OS_AUTH_URL', required=True,
|
|
||||||
help="Authentication URL. Defaults to "
|
|
||||||
"env[OS_AUTH_URL].")
|
|
||||||
parser.add_argument("--user-domain-id", action=EnvDefault,
|
|
||||||
envvar='OS_USER_DOMAIN_ID', required=False,
|
|
||||||
help="User Domain ID. Defaults to "
|
|
||||||
"env[OS_USER_DOMAIN_ID].")
|
|
||||||
parser.add_argument("--user-domain-name", action=EnvDefault,
|
|
||||||
envvar='OS_USER_DOMAIN_NAME', required=False,
|
|
||||||
help="User Domain Name. Defaults to "
|
|
||||||
"env[OS_USER_DOMAIN_NAME].")
|
|
||||||
parser.add_argument("--project-name", action=EnvDefault,
|
|
||||||
envvar='OS_PROJECT_NAME', required=False,
|
|
||||||
help="Project Name. Defaults to "
|
|
||||||
"env[OS_PROJECT_NAME].")
|
|
||||||
parser.add_argument("--project-domain-id", action=EnvDefault,
|
|
||||||
envvar='OS_PROJECT_DOMAIN_ID', required=False,
|
|
||||||
help="Project Domain ID. Defaults to "
|
|
||||||
"env[OS_PROJECT_DOMAIN_ID].")
|
|
||||||
parser.add_argument("--project-domain-name", action=EnvDefault,
|
|
||||||
envvar='OS_PROJECT_DOMAIN_NAME', required=False,
|
|
||||||
help="Project Domain Name. Defaults to "
|
|
||||||
"env[OS_PROJECT_DOMAIN_NAME].")
|
|
||||||
parser.add_argument("--cleanup-project", required=False, default=None,
|
|
||||||
help="ID or Name of project to purge. Not required "
|
|
||||||
"if --own-project has been set. Using --cleanup-project "
|
|
||||||
"requires authenticating with admin credentials.")
|
|
||||||
parser.add_argument("--own-project", action="store_true",
|
|
||||||
help="Delete resources of the project used to "
|
|
||||||
"authenticate. Useful if you don't have the "
|
|
||||||
"admin credentials of the platform.")
|
|
||||||
parser.add_argument("--insecure", action="store_true",
|
|
||||||
help="Explicitly allow all OpenStack clients to perform "
|
|
||||||
"insecure SSL (https) requests. The server's "
|
|
||||||
"certificate will not be verified against any "
|
|
||||||
"certificate authorities. This option should be "
|
|
||||||
"used with caution.")
|
|
||||||
|
|
||||||
args = parser.parse_args()
|
|
||||||
if not (args.cleanup_project or args.own_project):
|
|
||||||
parser.error('Either --cleanup-project '
|
|
||||||
'or --own-project has to be set')
|
|
||||||
if args.cleanup_project and args.own_project:
|
|
||||||
parser.error('Both --cleanup-project '
|
|
||||||
'and --own-project cannot be set')
|
|
||||||
if not (args.admin_project or args.project_name):
|
|
||||||
parser.error('--admin-project or --project-name is required')
|
|
||||||
return args
|
|
||||||
|
|
||||||
|
|
||||||
def main():
|
|
||||||
args = parse_args()
|
|
||||||
|
|
||||||
if args.verbose:
|
|
||||||
logging.basicConfig(level=logging.INFO)
|
|
||||||
else:
|
|
||||||
# Set default log level to Warning
|
|
||||||
logging.basicConfig(level=logging.WARNING)
|
|
||||||
|
|
||||||
data = {
|
|
||||||
'region_name': args.region_name,
|
|
||||||
'user_domain_id': args.user_domain_id,
|
|
||||||
'project_domain_id': args.project_domain_id,
|
|
||||||
'project_domain_name': args.project_domain_name,
|
|
||||||
'user_domain_name': args.user_domain_name,
|
|
||||||
'admin_role_name': args.admin_role_name
|
|
||||||
}
|
|
||||||
project = args.admin_project if args.admin_project else args.project_name
|
|
||||||
|
|
||||||
try:
|
|
||||||
keystone_manager = KeystoneManager(args.username, args.password,
|
|
||||||
project, args.auth_url,
|
|
||||||
args.insecure, **data)
|
|
||||||
except api_exceptions.Unauthorized as exc:
|
|
||||||
print("Authentication failed: {}".format(str(exc)))
|
|
||||||
sys.exit(constants.AUTHENTICATION_FAILED_ERROR_CODE)
|
|
||||||
|
|
||||||
remove_admin_role_after_purge = False
|
|
||||||
disable_project_after_purge = False
|
|
||||||
try:
|
|
||||||
cleanup_project_id = keystone_manager.get_project_id(
|
|
||||||
args.cleanup_project)
|
|
||||||
if not args.own_project:
|
|
||||||
try:
|
|
||||||
keystone_manager.become_project_admin(cleanup_project_id)
|
|
||||||
except api_exceptions.Conflict:
|
|
||||||
# user was already admin on the target project.
|
|
||||||
pass
|
|
||||||
else:
|
|
||||||
remove_admin_role_after_purge = True
|
|
||||||
|
|
||||||
# If the project was enabled before the purge, do not disable it after the purge
|
|
||||||
disable_project_after_purge = not keystone_manager.tenant_info.enabled
|
|
||||||
if disable_project_after_purge:
|
|
||||||
# The project is currently disabled so we need to enable it
|
|
||||||
# in order to delete resources of the project
|
|
||||||
keystone_manager.enable_project(cleanup_project_id)
|
|
||||||
|
|
||||||
except api_exceptions.Forbidden as exc:
|
|
||||||
print("Not authorized: {}".format(str(exc)))
|
|
||||||
sys.exit(constants.NOT_AUTHORIZED_ERROR_CODE)
|
|
||||||
except exceptions.NoSuchProject as exc:
|
|
||||||
print("Project {} doesn't exist".format(str(exc)))
|
|
||||||
sys.exit(constants.NO_SUCH_PROJECT_ERROR_CODE)
|
|
||||||
|
|
||||||
# Proper cleanup
|
|
||||||
try:
|
|
||||||
action = "dump" if args.dry_run else "purge"
|
|
||||||
perform_on_project(args.username, args.password, cleanup_project_id,
|
|
||||||
args.auth_url, args.endpoint_type, action,
|
|
||||||
args.insecure, **data)
|
|
||||||
except requests.exceptions.ConnectionError as exc:
|
|
||||||
print("Connection error: {}".format(str(exc)))
|
|
||||||
sys.exit(constants.CONNECTION_ERROR_CODE)
|
|
||||||
except (exceptions.DeletionFailed, exceptions.InvalidEndpoint) as exc:
|
|
||||||
print("Deletion of {} failed".format(str(exc)))
|
|
||||||
print("*Warning* Some resources may not have been cleaned up")
|
|
||||||
sys.exit(constants.DELETION_FAILED_ERROR_CODE)
|
|
||||||
|
|
||||||
if (not args.dry_run) and (not args.dont_delete_project) and (not args.own_project):
|
|
||||||
keystone_manager.delete_project(cleanup_project_id)
|
|
||||||
else:
|
|
||||||
# Project is not deleted, we may want to disable the project
|
|
||||||
# this must happen before we remove the admin role
|
|
||||||
if disable_project_after_purge:
|
|
||||||
keystone_manager.disable_project(cleanup_project_id)
|
|
||||||
# We may also want to remove ourselves from the purged project
|
|
||||||
if remove_admin_role_after_purge:
|
|
||||||
keystone_manager.undo_become_project_admin(cleanup_project_id)
|
|
||||||
sys.exit(0)
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
main()
|
|
@ -1,66 +0,0 @@
|
|||||||
# This software is released under the MIT License.
|
|
||||||
#
|
|
||||||
# Permission is hereby granted, free of charge, to any person obtaining a copy
|
|
||||||
# of this software and associated documentation files (the "Software"), to deal
|
|
||||||
# in the Software without restriction, including without limitation the rights
|
|
||||||
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
|
||||||
# copies of the Software, and to permit persons to whom the Software is
|
|
||||||
# furnished to do so, subject to the following conditions:
|
|
||||||
#
|
|
||||||
# The above copyright notice and this permission notice shall be included in all
|
|
||||||
# copies or substantial portions of the Software.
|
|
||||||
#
|
|
||||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
|
||||||
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
|
||||||
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
|
||||||
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
|
||||||
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
|
||||||
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
|
||||||
# SOFTWARE.
|
|
||||||
|
|
||||||
# Available resources classes
|
|
||||||
|
|
||||||
# The order of the Openstack resources in the subsequent list
|
|
||||||
# corresponds to the order in which ospurge will delete the
|
|
||||||
# resources. This order takes into account inter-resources
|
|
||||||
# dependencies, and tries to minimize the overall time duration of the
|
|
||||||
# purge operation.
|
|
||||||
|
|
||||||
RESOURCES_CLASSES = [
|
|
||||||
'CinderSnapshots',
|
|
||||||
'CinderBackups',
|
|
||||||
'NeutronFireWall',
|
|
||||||
'NeutronFireWallPolicy',
|
|
||||||
'NeutronFireWallRule',
|
|
||||||
'NeutronLbMembers',
|
|
||||||
'NeutronLbVip',
|
|
||||||
'NeutronLbHealthMonitor',
|
|
||||||
'NeutronLbPool',
|
|
||||||
'NovaServers',
|
|
||||||
'NeutronFloatingIps',
|
|
||||||
'NeutronMeteringLabel',
|
|
||||||
'NeutronInterfaces',
|
|
||||||
'NeutronRouters',
|
|
||||||
'NeutronPorts',
|
|
||||||
'NeutronNetworks',
|
|
||||||
'NeutronSecgroups',
|
|
||||||
'GlanceImages',
|
|
||||||
'SwiftObjects',
|
|
||||||
'SwiftContainers',
|
|
||||||
'CinderVolumes',
|
|
||||||
'CeilometerAlarms',
|
|
||||||
'HeatStacks'
|
|
||||||
]
|
|
||||||
|
|
||||||
# Error codes
|
|
||||||
|
|
||||||
NO_SUCH_PROJECT_ERROR_CODE = 2
|
|
||||||
AUTHENTICATION_FAILED_ERROR_CODE = 3
|
|
||||||
DELETION_FAILED_ERROR_CODE = 4
|
|
||||||
CONNECTION_ERROR_CODE = 5
|
|
||||||
NOT_AUTHORIZED_ERROR_CODE = 6
|
|
||||||
|
|
||||||
# Constants
|
|
||||||
|
|
||||||
RETRIES = 10 # Retry a delete operation 10 times before exiting
|
|
||||||
TIMEOUT = 5 # 5 seconds timeout between retries
|
|
@ -1,39 +1,19 @@
|
|||||||
# This software is released under the MIT License.
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||||
|
# not use this file except in compliance with the License. You may obtain
|
||||||
|
# a copy of the License at
|
||||||
#
|
#
|
||||||
# Permission is hereby granted, free of charge, to any person obtaining a copy
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
# of this software and associated documentation files (the "Software"), to deal
|
|
||||||
# in the Software without restriction, including without limitation the rights
|
|
||||||
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
|
||||||
# copies of the Software, and to permit persons to whom the Software is
|
|
||||||
# furnished to do so, subject to the following conditions:
|
|
||||||
#
|
#
|
||||||
# The above copyright notice and this permission notice shall be included in all
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
# copies or substantial portions of the Software.
|
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||||
#
|
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
# License for the specific language governing permissions and limitations
|
||||||
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
# under the License.
|
||||||
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
|
||||||
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
|
||||||
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
|
||||||
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
|
||||||
# SOFTWARE.
|
|
||||||
|
|
||||||
|
|
||||||
class ResourceNotEnabled(Exception):
|
class OSProjectNotFound(Exception):
|
||||||
pass
|
pass
|
||||||
|
|
||||||
|
|
||||||
class EndpointNotFound(Exception):
|
class TimeoutError(Exception):
|
||||||
pass
|
|
||||||
|
|
||||||
|
|
||||||
class InvalidEndpoint(Exception):
|
|
||||||
pass
|
|
||||||
|
|
||||||
|
|
||||||
class NoSuchProject(Exception):
|
|
||||||
pass
|
|
||||||
|
|
||||||
|
|
||||||
class DeletionFailed(Exception):
|
|
||||||
pass
|
pass
|
||||||
|
241
ospurge/main.py
Normal file
@ -0,0 +1,241 @@
|
|||||||
|
#!/usr/bin/env python3
|
||||||
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||||
|
# not use this file except in compliance with the License. You may obtain
|
||||||
|
# a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||||
|
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||||
|
# License for the specific language governing permissions and limitations
|
||||||
|
# under the License.
|
||||||
|
import argparse
|
||||||
|
import concurrent.futures
|
||||||
|
import logging
|
||||||
|
import operator
|
||||||
|
import sys
|
||||||
|
import threading
|
||||||
|
import typing
|
||||||
|
|
||||||
|
import os_client_config
|
||||||
|
import shade
|
||||||
|
|
||||||
|
from ospurge import exceptions
|
||||||
|
from ospurge.resources.base import ServiceResource
|
||||||
|
from ospurge import utils
|
||||||
|
|
||||||
|
if typing.TYPE_CHECKING: # pragma: no cover
|
||||||
|
from typing import Optional # noqa: F401
|
||||||
|
|
||||||
|
|
||||||
|
def configure_logging(verbose: bool) -> None:
|
||||||
|
log_level = logging.INFO if verbose else logging.WARNING
|
||||||
|
logging.basicConfig(
|
||||||
|
format='%(levelname)s:%(name)s:%(asctime)s:%(message)s',
|
||||||
|
level=log_level
|
||||||
|
)
|
||||||
|
logging.getLogger(
|
||||||
|
'requests.packages.urllib3.connectionpool').setLevel(logging.WARNING)
|
||||||
|
|
||||||
|
|
||||||
|
def create_argument_parser() -> argparse.ArgumentParser:
|
||||||
|
parser = argparse.ArgumentParser(
|
||||||
|
description="Purge resources from an OpenStack project."
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"--verbose", action="store_true",
|
||||||
|
help="Make output verbose"
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"--dry-run", action="store_true",
|
||||||
|
help="List project's resources"
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"--delete-shared-resources", action="store_true",
|
||||||
|
help="Whether to delete shared resources (public images and external "
|
||||||
|
"networks)"
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"--admin-role-name", default="admin",
|
||||||
|
help="Name of admin role. Defaults to 'admin'. This role will be "
|
||||||
|
"temporarily granted to the authenticated user on the "
|
||||||
|
"project to purge."
|
||||||
|
)
|
||||||
|
|
||||||
|
group = parser.add_mutually_exclusive_group(required=True)
|
||||||
|
group.add_argument(
|
||||||
|
"--purge-project", metavar="ID_OR_NAME",
|
||||||
|
help="ID or Name of project to purge. This option requires "
|
||||||
|
"to authenticate with admin credentials."
|
||||||
|
)
|
||||||
|
group.add_argument(
|
||||||
|
"--purge-own-project", action="store_true",
|
||||||
|
help="Purge resources of the project used to authenticate. Useful "
|
||||||
|
"if you don't have the admin credentials of the cloud."
|
||||||
|
)
|
||||||
|
return parser
|
||||||
|
|
||||||
|
|
||||||
|
class CredentialsManager(object):
|
||||||
|
def __init__(self, options: argparse.Namespace) -> None:
|
||||||
|
self.options = options
|
||||||
|
|
||||||
|
self.revoke_role_after_purge = False
|
||||||
|
self.disable_project_after_purge = False
|
||||||
|
|
||||||
|
self.cloud = None # type: Optional[shade.OpenStackCloud]
|
||||||
|
self.operator_cloud = None # type: Optional[shade.OperatorCloud]
|
||||||
|
|
||||||
|
if options.purge_own_project:
|
||||||
|
self.cloud = shade.openstack_cloud(argparse=options)
|
||||||
|
self.user_id = self.cloud.keystone_session.get_user_id()
|
||||||
|
self.project_id = self.cloud.keystone_session.get_project_id()
|
||||||
|
else:
|
||||||
|
self.operator_cloud = shade.operator_cloud(argparse=options)
|
||||||
|
self.user_id = self.operator_cloud.keystone_session.get_user_id()
|
||||||
|
|
||||||
|
project = self.operator_cloud.get_project(options.purge_project)
|
||||||
|
if not project:
|
||||||
|
raise exceptions.OSProjectNotFound(
|
||||||
|
"Unable to find project '{}'".format(options.purge_project)
|
||||||
|
)
|
||||||
|
self.project_id = project['id']
|
||||||
|
|
||||||
|
# If project is not enabled, we must disable it after purge.
|
||||||
|
self.disable_project_after_purge = not project.enabled
|
||||||
|
|
||||||
|
# Reuse the information passed to get the `OperatorCloud` but
|
||||||
|
# change the project. This way we bind/re-scope to the project
|
||||||
|
# we want to purge, not the project we authenticated to.
|
||||||
|
self.cloud = shade.openstack_cloud(
|
||||||
|
**utils.replace_project_info(
|
||||||
|
self.operator_cloud.cloud_config.config,
|
||||||
|
self.project_id
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
auth_args = self.cloud.cloud_config.get_auth_args()
|
||||||
|
logging.warning(
|
||||||
|
"Going to list and/or delete resources from project '%s'",
|
||||||
|
options.purge_project or auth_args.get('project_name')
|
||||||
|
or auth_args.get('project_id')
|
||||||
|
)
|
||||||
|
|
||||||
|
def ensure_role_on_project(self) -> None:
|
||||||
|
if self.operator_cloud and self.operator_cloud.grant_role(
|
||||||
|
self.options.admin_role_name,
|
||||||
|
project=self.options.purge_project, user=self.user_id
|
||||||
|
):
|
||||||
|
logging.warning(
|
||||||
|
"Role '%s' granted to user '%s' on project '%s'",
|
||||||
|
self.options.admin_role_name, self.user_id, self.options.purge_project
|
||||||
|
)
|
||||||
|
self.revoke_role_after_purge = True
|
||||||
|
|
||||||
|
def revoke_role_on_project(self) -> None:
|
||||||
|
self.operator_cloud.revoke_role(
|
||||||
|
self.options.admin_role_name, user=self.user_id,
|
||||||
|
project=self.options.purge_project)
|
||||||
|
logging.warning(
|
||||||
|
"Role '%s' revoked from user '%s' on project '%s'",
|
||||||
|
self.options.admin_role_name, self.user_id, self.options.purge_project
|
||||||
|
)
|
||||||
|
|
||||||
|
def ensure_enabled_project(self) -> None:
|
||||||
|
if self.operator_cloud and self.disable_project_after_purge:
|
||||||
|
self.operator_cloud.update_project(self.project_id, enabled=True)
|
||||||
|
logging.warning("Project '%s' was disabled before purge and it is "
|
||||||
|
"now enabled", self.options.purge_project)
|
||||||
|
|
||||||
|
def disable_project(self) -> None:
|
||||||
|
self.operator_cloud.update_project(self.project_id, enabled=False)
|
||||||
|
logging.warning("Project '%s' was disabled before purge and it is "
|
||||||
|
"now also disabled", self.options.purge_project)
|
||||||
|
|
||||||
|
|
||||||
|
@utils.monkeypatch_oscc_logging_warning
|
||||||
|
def runner(
|
||||||
|
resource_mngr: ServiceResource, options: argparse.Namespace,
|
||||||
|
exit: threading.Event
|
||||||
|
) -> None:
|
||||||
|
try:
|
||||||
|
|
||||||
|
if not options.dry_run:
|
||||||
|
resource_mngr.wait_for_check_prerequisite(exit)
|
||||||
|
|
||||||
|
for resource in resource_mngr.list():
|
||||||
|
# No need to continue if requested to exit.
|
||||||
|
if exit.is_set():
|
||||||
|
return
|
||||||
|
|
||||||
|
if resource_mngr.should_delete(resource):
|
||||||
|
logging.info("Going to delete %s",
|
||||||
|
resource_mngr.to_str(resource))
|
||||||
|
|
||||||
|
if options.dry_run:
|
||||||
|
continue
|
||||||
|
|
||||||
|
utils.call_and_ignore_notfound(resource_mngr.delete, resource)
|
||||||
|
|
||||||
|
except Exception as exc:
|
||||||
|
log = logging.error
|
||||||
|
recoverable = False
|
||||||
|
if hasattr(exc, 'inner_exception'):
|
||||||
|
# inner_exception is a tuple (type, value, traceback)
|
||||||
|
# mypy complains: "Exception" has no attribute "inner_exception"
|
||||||
|
exc_info = exc.inner_exception # type: ignore
|
||||||
|
if exc_info[0].__name__.lower().endswith('endpointnotfound'):
|
||||||
|
log = logging.info
|
||||||
|
recoverable = True
|
||||||
|
log("Can't deal with %s: %r", resource_mngr.__class__.__name__, exc)
|
||||||
|
if not recoverable:
|
||||||
|
exit.set()
|
||||||
|
|
||||||
|
|
||||||
|
def main() -> None:
|
||||||
|
parser = create_argument_parser()
|
||||||
|
|
||||||
|
cloud_config = os_client_config.OpenStackConfig()
|
||||||
|
cloud_config.register_argparse_arguments(parser, sys.argv)
|
||||||
|
|
||||||
|
options = parser.parse_args()
|
||||||
|
configure_logging(options.verbose)
|
||||||
|
|
||||||
|
creds_manager = CredentialsManager(options=options)
|
||||||
|
creds_manager.ensure_enabled_project()
|
||||||
|
creds_manager.ensure_role_on_project()
|
||||||
|
|
||||||
|
resource_managers = sorted(
|
||||||
|
[cls(creds_manager) for cls in utils.get_all_resource_classes()],
|
||||||
|
key=operator.methodcaller('order')
|
||||||
|
)
|
||||||
|
|
||||||
|
# This is an `Event` used to signal whether one of the threads encountered
|
||||||
|
# an unrecoverable error, at which point all threads should exit because
|
||||||
|
# otherwise there's a chance the cleanup process never finishes.
|
||||||
|
exit = threading.Event()
|
||||||
|
|
||||||
|
# Dummy function to work around `ThreadPoolExecutor.map()` not accepting
|
||||||
|
# extra keyword arguments for the callable.
|
||||||
|
def partial_runner(resource_manager: ServiceResource) -> None:
|
||||||
|
runner(resource_manager, options=options,
|
||||||
|
exit=exit) # pragma: no cover
|
||||||
|
|
||||||
|
try:
|
||||||
|
with concurrent.futures.ThreadPoolExecutor(8) as executor:
|
||||||
|
executor.map(partial_runner, resource_managers)
|
||||||
|
except KeyboardInterrupt:
|
||||||
|
exit.set()
|
||||||
|
|
||||||
|
if creds_manager.revoke_role_after_purge:
|
||||||
|
creds_manager.revoke_role_on_project()
|
||||||
|
|
||||||
|
if creds_manager.disable_project_after_purge:
|
||||||
|
creds_manager.disable_project()
|
||||||
|
|
||||||
|
sys.exit(int(exit.is_set()))
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__": # pragma: no cover
|
||||||
|
main()
|
0
ospurge/resources/__init__.py
Normal file
148
ospurge/resources/base.py
Normal file
@ -0,0 +1,148 @@
|
|||||||
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||||
|
# not use this file except in compliance with the License. You may obtain
|
||||||
|
# a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||||
|
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||||
|
# License for the specific language governing permissions and limitations
|
||||||
|
# under the License.
|
||||||
|
import abc
|
||||||
|
import collections
|
||||||
|
import inspect
|
||||||
|
import logging
|
||||||
|
import threading
|
||||||
|
import time
|
||||||
|
from typing import Any
|
||||||
|
from typing import Dict
|
||||||
|
from typing import Iterable
|
||||||
|
from typing import TYPE_CHECKING
|
||||||
|
|
||||||
|
from ospurge import exceptions
|
||||||
|
|
||||||
|
if TYPE_CHECKING: # pragma: no cover
|
||||||
|
import argparse # noqa: F401
|
||||||
|
from ospurge.main import CredentialsManager # noqa: F401
|
||||||
|
import shade # noqa: F401
|
||||||
|
from typing import Optional # noqa: F401
|
||||||
|
|
||||||
|
|
||||||
|
class MatchSignaturesMeta(type):
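# Warn at class-definition time when a public method is overridden with a signature that differs from the parent's.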
|
||||||
|
def __init__(self, clsname, bases, clsdict):
|
||||||
|
super().__init__(clsname, bases, clsdict)
|
||||||
|
sup = super(self, self) # type: ignore # See python/mypy #857
|
||||||
|
for name, value in clsdict.items():
|
||||||
|
if name.startswith('_') or not callable(value):
|
||||||
|
continue
|
||||||
|
|
||||||
|
# Get the previous definition (if any) and compare the signatures
|
||||||
|
prev_dfn = getattr(sup, name, None)
|
||||||
|
if prev_dfn:
|
||||||
|
prev_sig = inspect.signature(prev_dfn)
|
||||||
|
val_sig = inspect.signature(value)
|
||||||
|
if prev_sig != val_sig:
|
||||||
|
logging.warning('Signature mismatch in %s. %s != %s',
|
||||||
|
value.__qualname__, prev_sig, val_sig)
|
||||||
|
|
||||||
|
|
||||||
|
class OrderedMeta(type):
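# Warn when the methods named in 'ordered_methods' are not defined in that order in the class body.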
|
||||||
|
def __new__(cls, clsname, bases, clsdict):
|
||||||
|
ordered_methods = cls.ordered_methods
|
||||||
|
allowed_next_methods = list(ordered_methods)
|
||||||
|
for name, value in clsdict.items():
|
||||||
|
if name not in ordered_methods:
|
||||||
|
continue
|
||||||
|
|
||||||
|
if name not in allowed_next_methods:
|
||||||
|
logging.warning(
|
||||||
|
"Method %s not defined at the correct location. Methods "
|
||||||
|
"in class %s must be defined in the following order %r",
|
||||||
|
value.__qualname__, clsname, ordered_methods
|
||||||
|
)
|
||||||
|
continue # pragma: no cover
|
||||||
|
|
||||||
|
_slice = slice(allowed_next_methods.index(name) + 1, None)
|
||||||
|
allowed_next_methods = allowed_next_methods[_slice]
|
||||||
|
|
||||||
|
# Cast to dict is required. We can't pass an OrderedDict here.
|
||||||
|
return super().__new__(cls, clsname, bases, dict(clsdict))
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def __prepare__(cls, clsname, bases):
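# An OrderedDict class namespace lets __new__ above see methods in their definition order.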
|
||||||
|
return collections.OrderedDict()
|
||||||
|
|
||||||
|
|
||||||
|
class CodingStyleMixin(OrderedMeta, MatchSignaturesMeta, abc.ABCMeta):
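# Metaclass combining the two checks above with abc.ABCMeta; applied to ServiceResource below.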
|
||||||
|
ordered_methods = ['order', 'check_prerequisite', 'list', 'should_delete',
|
||||||
|
'delete', 'to_str']
|
||||||
|
|
||||||
|
|
||||||
|
class BaseServiceResource(object):
|
||||||
|
def __init__(self) -> None:
|
||||||
|
self.cloud = None # type: Optional[shade.OpenStackCloud]
|
||||||
|
self.cleanup_project_id = None # type: Optional[str]
|
||||||
|
self.options = None # type: Optional[argparse.Namespace]
|
||||||
|
|
||||||
|
|
||||||
|
class ServiceResource(BaseServiceResource, metaclass=CodingStyleMixin):
|
||||||
|
ORDER = None # type: int
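# Concrete resource classes override ORDER; main() sorts managers by it, so lower values are handled first.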
|
||||||
|
|
||||||
|
def __init__(self, creds_manager: 'CredentialsManager') -> None:
|
||||||
|
if self.ORDER is None:
|
||||||
|
raise ValueError(
|
||||||
|
'Class {}.{} must override the "ORDER" class attribute'.format(
|
||||||
|
self.__module__, self.__class__.__name__) # type: ignore
|
||||||
|
)
|
||||||
|
|
||||||
|
self.cloud = creds_manager.cloud
|
||||||
|
self.options = creds_manager.options
|
||||||
|
self.cleanup_project_id = creds_manager.project_id
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def order(cls) -> int:
|
||||||
|
return cls.ORDER
|
||||||
|
|
||||||
|
def check_prerequisite(self) -> bool:
|
||||||
|
return True
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def list(self) -> Iterable:
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def should_delete(self, resource: Dict[str, Any]) -> bool:
|
||||||
|
project_id = resource.get('project_id', resource.get('tenant_id'))
|
||||||
|
if project_id:
|
||||||
|
return project_id == self.cleanup_project_id
|
||||||
|
else:
|
||||||
|
logging.warning("Can't determine owner of resource %s", resource)
|
||||||
|
return True
|
||||||
|
|
||||||
|
@abc.abstractmethod
|
||||||
|
def delete(self, resource: Dict[str, Any]) -> None:
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
@abc.abstractmethod
|
||||||
|
def to_str(resource: Dict[str, Any]) -> str:
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
def wait_for_check_prerequisite(self, exit: threading.Event) -> None:
|
||||||
|
timeout = time.time() + 120
|
||||||
|
sleep = 2
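# Poll check_prerequisite() with exponential backoff (doubling, capped at 8 seconds) until it passes or the timeout expires.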
|
||||||
|
while time.time() < timeout:
|
||||||
|
if exit.is_set():
|
||||||
|
raise RuntimeError(
|
||||||
|
"Resource manager exited because it was interrupted or "
|
||||||
|
"another resource manager failed"
|
||||||
|
)
|
||||||
|
if self.check_prerequisite():
|
||||||
|
break
|
||||||
|
logging.info("Waiting for check_prerequisite() in %s",
|
||||||
|
self.__class__.__name__)
|
||||||
|
time.sleep(sleep)
|
||||||
|
sleep = min(sleep * 2, 8)
|
||||||
|
else:
|
||||||
|
raise exceptions.TimeoutError(
|
||||||
|
"Timeout exceeded waiting for check_prerequisite()")
|
69
ospurge/resources/cinder.py
Normal file
@ -0,0 +1,69 @@
|
|||||||
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||||
|
# not use this file except in compliance with the License. You may obtain
|
||||||
|
# a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||||
|
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||||
|
# License for the specific language governing permissions and limitations
|
||||||
|
# under the License.
|
||||||
|
from typing import Any
|
||||||
|
from typing import Dict
|
||||||
|
from typing import Iterable
|
||||||
|
|
||||||
|
from ospurge.resources import base
|
||||||
|
|
||||||
|
|
||||||
|
class Backups(base.ServiceResource):
|
||||||
|
ORDER = 33
|
||||||
|
|
||||||
|
def list(self) -> Iterable:
|
||||||
|
return self.cloud.list_volume_backups()
|
||||||
|
|
||||||
|
def delete(self, resource: Dict[str, Any]) -> None:
|
||||||
|
self.cloud.delete_volume_backup(resource['id'])
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def to_str(resource: Dict[str, Any]) -> str:
|
||||||
|
return "Volume Backup (id='{}', name='{}')".format(
|
||||||
|
resource['id'], resource['name'])
|
||||||
|
|
||||||
|
|
||||||
|
class Snapshots(base.ServiceResource):
|
||||||
|
ORDER = 36
|
||||||
|
|
||||||
|
def list(self) -> Iterable:
|
||||||
|
return self.cloud.list_volume_snapshots()
|
||||||
|
|
||||||
|
def delete(self, resource: Dict[str, Any]) -> None:
|
||||||
|
self.cloud.delete_volume_snapshot(resource['id'])
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def to_str(resource: Dict[str, Any]) -> str:
|
||||||
|
return "Volume Snapshot (id='{}', name='{}')".format(
|
||||||
|
resource['id'], resource['name'])
|
||||||
|
|
||||||
|
|
||||||
|
class Volumes(base.ServiceResource):
|
||||||
|
ORDER = 65
|
||||||
|
|
||||||
|
def check_prerequisite(self) -> bool:
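# Volumes are only deleted once no snapshots or servers remain.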
|
||||||
|
return (self.cloud.list_volume_snapshots() == [] and
|
||||||
|
self.cloud.list_servers() == [])
|
||||||
|
|
||||||
|
def list(self) -> Iterable:
|
||||||
|
return self.cloud.list_volumes()
|
||||||
|
|
||||||
|
def should_delete(self, resource: Dict[str, Any]) -> bool:
|
||||||
|
attr = 'os-vol-tenant-attr:tenant_id'
|
||||||
|
return resource[attr] == self.cleanup_project_id
|
||||||
|
|
||||||
|
def delete(self, resource: Dict[str, Any]) -> None:
|
||||||
|
self.cloud.delete_volume(resource['id'])
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def to_str(resource: Dict[str, Any]) -> str:
|
||||||
|
return "Volume (id='{}', name='{}')".format(
|
||||||
|
resource['id'], resource['name'])
|
53
ospurge/resources/glance.py
Normal file
@ -0,0 +1,53 @@
|
|||||||
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||||
|
# not use this file except in compliance with the License. You may obtain
|
||||||
|
# a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||||
|
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||||
|
# License for the specific language governing permissions and limitations
|
||||||
|
# under the License.
|
||||||
|
from typing import Any
|
||||||
|
from typing import Dict
|
||||||
|
from typing import Iterable
|
||||||
|
|
||||||
|
from ospurge.resources import base
|
||||||
|
from ospurge.resources.base import BaseServiceResource
|
||||||
|
|
||||||
|
|
||||||
|
class ListImagesMixin(BaseServiceResource):
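# List images owned by the project to purge; public images are skipped unless --delete-shared-resources is set.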
|
||||||
|
def list_images_by_owner(self) -> Iterable[Dict[str, Any]]:
|
||||||
|
images = []
|
||||||
|
for image in self.cloud.list_images():
|
||||||
|
if image['owner'] != self.cleanup_project_id:
|
||||||
|
continue
|
||||||
|
|
||||||
|
is_public = image.get('is_public', False)
|
||||||
|
visibility = image.get('visibility', "")
|
||||||
|
if is_public is True or visibility == 'public':
|
||||||
|
if self.options.delete_shared_resources is False:
|
||||||
|
continue
|
||||||
|
|
||||||
|
images.append(image)
|
||||||
|
|
||||||
|
return images
|
||||||
|
|
||||||
|
|
||||||
|
class Images(base.ServiceResource, ListImagesMixin):
|
||||||
|
ORDER = 53
|
||||||
|
|
||||||
|
def list(self) -> Iterable:
|
||||||
|
return self.list_images_by_owner()
|
||||||
|
|
||||||
|
def should_delete(self, resource: Dict[str, Any]) -> bool:
|
||||||
|
return resource['owner'] == self.cleanup_project_id
|
||||||
|
|
||||||
|
def delete(self, resource: Dict[str, Any]) -> None:
|
||||||
|
self.cloud.delete_image(resource['id'])
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def to_str(resource: Dict[str, Any]) -> str:
|
||||||
|
return "Image (id='{}', name='{}')".format(
|
||||||
|
resource['id'], resource['name'])
|
149
ospurge/resources/neutron.py
Normal file
@ -0,0 +1,149 @@
|
|||||||
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||||
|
# not use this file except in compliance with the License. You may obtain
|
||||||
|
# a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||||
|
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||||
|
# License for the specific language governing permissions and limitations
|
||||||
|
# under the License.
|
||||||
|
from typing import Any
|
||||||
|
from typing import Dict
|
||||||
|
from typing import Iterable
|
||||||
|
|
||||||
|
from ospurge.resources import base
|
||||||
|
|
||||||
|
|
||||||
|
class FloatingIPs(base.ServiceResource):
|
||||||
|
ORDER = 25
|
||||||
|
|
||||||
|
def check_prerequisite(self) -> bool:
|
||||||
|
# We can't delete a FIP if it's attached
|
||||||
|
return self.cloud.list_servers() == []
|
||||||
|
|
||||||
|
def list(self) -> Iterable:
|
||||||
|
return self.cloud.search_floating_ips(filters={
|
||||||
|
'tenant_id': self.cleanup_project_id
|
||||||
|
})
|
||||||
|
|
||||||
|
def delete(self, resource: Dict[str, Any]) -> None:
|
||||||
|
self.cloud.delete_floating_ip(resource['id'])
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def to_str(resource: Dict[str, Any]) -> str:
|
||||||
|
return "Floating IP (id='{}')".format(resource['id'])
|
||||||
|
|
||||||
|
|
||||||
|
class RouterInterfaces(base.ServiceResource):
|
||||||
|
ORDER = 42
|
||||||
|
|
||||||
|
def check_prerequisite(self) -> bool:
|
||||||
|
return (self.cloud.list_servers() == [] and
|
||||||
|
self.cloud.search_floating_ips(
|
||||||
|
filters={'tenant_id': self.cleanup_project_id}
|
||||||
|
) == [])
|
||||||
|
|
||||||
|
def list(self) -> Iterable:
|
||||||
|
return self.cloud.list_ports(
|
||||||
|
filters={'device_owner': 'network:router_interface',
|
||||||
|
'tenant_id': self.cleanup_project_id}
|
||||||
|
)
|
||||||
|
|
||||||
|
def delete(self, resource: Dict[str, Any]) -> None:
|
||||||
|
self.cloud.remove_router_interface({'id': resource['device_id']},
|
||||||
|
port_id=resource['id'])
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def to_str(resource: Dict[str, Any]) -> str:
|
||||||
|
return "Router Interface (id='{}', router_id='{}')".format(
|
||||||
|
resource['id'], resource['device_id'])
|
||||||
|
|
||||||
|
|
||||||
|
class Routers(base.ServiceResource):
|
||||||
|
ORDER = 44
|
||||||
|
|
||||||
|
def check_prerequisite(self) -> bool:
|
||||||
|
return self.cloud.list_ports(
|
||||||
|
filters={'device_owner': 'network:router_interface',
|
||||||
|
'tenant_id': self.cleanup_project_id}
|
||||||
|
) == []
|
||||||
|
|
||||||
|
def list(self) -> Iterable:
|
||||||
|
return self.cloud.list_routers()
|
||||||
|
|
||||||
|
def delete(self, resource: Dict[str, Any]) -> None:
|
||||||
|
self.cloud.delete_router(resource['id'])
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def to_str(resource: Dict[str, Any]) -> str:
|
||||||
|
return "Router (id='{}', name='{}')".format(
|
||||||
|
resource['id'], resource['name'])
|
||||||
|
|
||||||
|
|
||||||
|
class Ports(base.ServiceResource):
|
||||||
|
ORDER = 46
|
||||||
|
|
||||||
|
def list(self) -> Iterable:
|
||||||
|
ports = self.cloud.list_ports(
|
||||||
|
filters={'tenant_id': self.cleanup_project_id}
|
||||||
|
)
|
||||||
|
excluded = ['network:dhcp', 'network:router_interface']
|
||||||
|
return [p for p in ports if p['device_owner'] not in excluded]
|
||||||
|
|
||||||
|
def delete(self, resource: Dict[str, Any]) -> None:
|
||||||
|
self.cloud.delete_port(resource['id'])
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def to_str(resource: Dict[str, Any]) -> str:
|
||||||
|
return "Port (id='{}', network_id='{}', device_owner='{}')".format(
|
||||||
|
resource['id'], resource['network_id'], resource['device_owner'])
|
||||||
|
|
||||||
|
|
||||||
|
class Networks(base.ServiceResource):
|
||||||
|
ORDER = 48
|
||||||
|
|
||||||
|
def check_prerequisite(self) -> bool:
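# Networks can only be removed once no ports other than DHCP ports remain in the project.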
|
||||||
|
ports = self.cloud.list_ports(
|
||||||
|
filters={'tenant_id': self.cleanup_project_id}
|
||||||
|
)
|
||||||
|
excluded = ['network:dhcp']
|
||||||
|
return [p for p in ports if p['device_owner'] not in excluded] == []
|
||||||
|
|
||||||
|
def list(self) -> Iterable:
|
||||||
|
networks = []
|
||||||
|
for network in self.cloud.list_networks(
|
||||||
|
filters={'tenant_id': self.cleanup_project_id}
|
||||||
|
):
|
||||||
|
if network['router:external'] is True:
|
||||||
|
if not self.options.delete_shared_resources:
|
||||||
|
continue
|
||||||
|
networks.append(network)
|
||||||
|
|
||||||
|
return networks
|
||||||
|
|
||||||
|
def delete(self, resource: Dict[str, Any]) -> None:
|
||||||
|
self.cloud.delete_network(resource['id'])
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def to_str(resource: Dict[str, Any]) -> str:
|
||||||
|
return "Network (id='{}', name='{}')".format(
|
||||||
|
resource['id'], resource['name'])
|
||||||
|
|
||||||
|
|
||||||
|
class SecurityGroups(base.ServiceResource):
|
||||||
|
ORDER = 49
|
||||||
|
|
||||||
|
def list(self) -> Iterable:
|
||||||
|
return [sg for sg in self.cloud.list_security_groups(
|
||||||
|
filters={'tenant_id': self.cleanup_project_id})
|
||||||
|
if sg['name'] != 'default']
|
||||||
|
|
||||||
|
def delete(self, resource: Dict[str, Any]) -> None:
|
||||||
|
self.cloud.delete_security_group(resource['id'])
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def to_str(resource: Dict[str, Any]) -> str:
|
||||||
|
return "Security Group (id='{}', name='{}')".format(
|
||||||
|
resource['id'], resource['name'])
|
31
ospurge/resources/nova.py
Normal file
31
ospurge/resources/nova.py
Normal file
@ -0,0 +1,31 @@
|
|||||||
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||||
|
# not use this file except in compliance with the License. You may obtain
|
||||||
|
# a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||||
|
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||||
|
# License for the specific language governing permissions and limitations
|
||||||
|
# under the License.
|
||||||
|
from typing import Any
|
||||||
|
from typing import Dict
|
||||||
|
from typing import Iterable
|
||||||
|
|
||||||
|
from ospurge.resources import base
|
||||||
|
|
||||||
|
|
||||||
|
class Servers(base.ServiceResource):
|
||||||
|
ORDER = 15
|
||||||
|
|
||||||
|
def list(self) -> Iterable:
|
||||||
|
return self.cloud.list_servers()
|
||||||
|
|
||||||
|
def delete(self, resource: Dict[str, Any]) -> None:
|
||||||
|
self.cloud.delete_server(resource['id'])
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def to_str(resource: Dict[str, Any]) -> str:
|
||||||
|
return "VM (id='{}', name='{}')".format(
|
||||||
|
resource['id'], resource['name'])
|
63
ospurge/resources/swift.py
Normal file
63
ospurge/resources/swift.py
Normal file
@ -0,0 +1,63 @@
|
|||||||
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||||
|
# not use this file except in compliance with the License. You may obtain
|
||||||
|
# a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||||
|
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||||
|
# License for the specific language governing permissions and limitations
|
||||||
|
# under the License.
|
||||||
|
from typing import Any
|
||||||
|
from typing import Dict
|
||||||
|
from typing import Iterable
|
||||||
|
from typing import Iterator
|
||||||
|
|
||||||
|
from ospurge.resources import base
|
||||||
|
from ospurge.resources.base import BaseServiceResource
|
||||||
|
from ospurge.resources import glance
|
||||||
|
|
||||||
|
|
||||||
|
class ListObjectsMixin(BaseServiceResource):
|
||||||
|
def list_objects(self) -> Iterator[Dict[str, Any]]:
|
||||||
|
for container in self.cloud.list_containers():
|
||||||
|
for obj in self.cloud.list_objects(container['name']):
|
||||||
|
obj['container_name'] = container['name']
|
||||||
|
yield obj
|
||||||
|
|
||||||
|
|
||||||
|
class Objects(base.ServiceResource, glance.ListImagesMixin, ListObjectsMixin):
|
||||||
|
ORDER = 73
|
||||||
|
|
||||||
|
def check_prerequisite(self) -> bool:
|
||||||
|
return (self.list_images_by_owner() == [] and
|
||||||
|
self.cloud.list_volume_backups() == [])
|
||||||
|
|
||||||
|
def list(self) -> Iterable:
|
||||||
|
yield from self.list_objects()
|
||||||
|
|
||||||
|
def delete(self, resource: Dict[str, Any]) -> None:
|
||||||
|
self.cloud.delete_object(resource['container_name'], resource['name'])
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def to_str(resource: Dict[str, Any]) -> str:
|
||||||
|
return "Object '{}' from Container '{}'".format(
|
||||||
|
resource['name'], resource['container_name'])
|
||||||
|
|
||||||
|
|
||||||
|
class Containers(base.ServiceResource, ListObjectsMixin):
|
||||||
|
ORDER = 75
|
||||||
|
|
||||||
|
def check_prerequisite(self) -> bool:
|
||||||
|
return list(self.list_objects()) == []
|
||||||
|
|
||||||
|
def list(self) -> Iterable:
|
||||||
|
return self.cloud.list_containers()
|
||||||
|
|
||||||
|
def delete(self, resource: Dict[str, Any]) -> None:
|
||||||
|
self.cloud.delete_container(resource['name'])
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def to_str(resource: Dict[str, Any]) -> str:
|
||||||
|
return "Container (name='{}')".format(resource['name'])
|
File diff suppressed because it is too large
Load Diff
0
ospurge/tests/resources/__init__.py
Normal file
0
ospurge/tests/resources/__init__.py
Normal file
238
ospurge/tests/resources/test_base.py
Normal file
238
ospurge/tests/resources/test_base.py
Normal file
@ -0,0 +1,238 @@
|
|||||||
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||||
|
# not use this file except in compliance with the License. You may obtain
|
||||||
|
# a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||||
|
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||||
|
# License for the specific language governing permissions and limitations
|
||||||
|
# under the License.
|
||||||
|
import time
|
||||||
|
from typing import Any
|
||||||
|
from typing import Dict
|
||||||
|
from typing import Iterable
|
||||||
|
import unittest
|
||||||
|
from unittest import mock
|
||||||
|
|
||||||
|
from ospurge import exceptions
|
||||||
|
from ospurge.resources import base
|
||||||
|
|
||||||
|
|
||||||
|
def generate_timeout_series(timeout):
|
||||||
|
"""Generate a series of times that exceeds the given timeout.
|
||||||
|
Yields a series of fake time.time() floating point numbers
|
||||||
|
such that the difference between each pair in the series just
|
||||||
|
exceeds the timeout value that is passed in. Useful for
|
||||||
|
mocking time.time() in methods that otherwise wait for timeout
|
||||||
|
seconds.
|
||||||
|
"""
|
||||||
|
iteration = 0
|
||||||
|
while True:
|
||||||
|
iteration += 1
|
||||||
|
yield (iteration * timeout) + iteration
|
||||||
|
|
||||||
|
|
||||||
|
class SignatureMismatch(Exception):
|
||||||
|
pass
|
||||||
|
|
||||||
|
|
||||||
|
class WrongMethodDefOrder(Exception):
|
||||||
|
pass
|
||||||
|
|
||||||
|
|
||||||
|
@mock.patch('logging.warning', mock.Mock(side_effect=SignatureMismatch))
|
||||||
|
class TestMatchSignaturesMeta(unittest.TestCase):
|
||||||
|
class Test(metaclass=base.MatchSignaturesMeta):
|
||||||
|
def a(self, arg1):
|
||||||
|
pass
|
||||||
|
|
||||||
|
def b(self, arg1=True):
|
||||||
|
pass
|
||||||
|
|
||||||
|
def c(self, arg1, arg2):
|
||||||
|
pass
|
||||||
|
|
||||||
|
def _private(self):
|
||||||
|
pass
|
||||||
|
|
||||||
|
def test_nominal(self):
|
||||||
|
class Foo1(self.Test):
|
||||||
|
def a(self, arg1):
|
||||||
|
pass
|
||||||
|
|
||||||
|
class Foo2(self.Test):
|
||||||
|
def b(self, arg1=True):
|
||||||
|
pass
|
||||||
|
|
||||||
|
class Foo3(self.Test):
|
||||||
|
def c(self, arg1, arg2):
|
||||||
|
pass
|
||||||
|
|
||||||
|
class Foo4(self.Test):
|
||||||
|
def _startswith_underscore(self, arg1, arg2):
|
||||||
|
pass
|
||||||
|
|
||||||
|
class Foo5(self.Test):
|
||||||
|
def new_method(self):
|
||||||
|
pass
|
||||||
|
|
||||||
|
def test_method_arg1_has_different_name(self):
|
||||||
|
with self.assertRaises(SignatureMismatch):
|
||||||
|
class Foo(self.Test):
|
||||||
|
def a(self, other_name):
|
||||||
|
pass
|
||||||
|
|
||||||
|
def test_method_arg1_has_different_value(self):
|
||||||
|
with self.assertRaises(SignatureMismatch):
|
||||||
|
class Foo(self.Test):
|
||||||
|
def b(self, arg1=False):
|
||||||
|
pass
|
||||||
|
|
||||||
|
def test_method_has_different_number_of_args(self):
|
||||||
|
with self.assertRaises(SignatureMismatch):
|
||||||
|
class Foo(self.Test):
|
||||||
|
def c(self, arg1, arg2, arg3):
|
||||||
|
pass
|
||||||
|
|
||||||
|
|
||||||
|
@mock.patch('logging.warning', mock.Mock(side_effect=WrongMethodDefOrder))
|
||||||
|
class TestOrderedMeta(unittest.TestCase):
|
||||||
|
class Test(base.OrderedMeta):
|
||||||
|
ordered_methods = ['a', 'b']
|
||||||
|
|
||||||
|
def test_nominal(self):
|
||||||
|
class Foo1(metaclass=self.Test):
|
||||||
|
def a(self):
|
||||||
|
pass
|
||||||
|
|
||||||
|
class Foo2(metaclass=self.Test):
|
||||||
|
def b(self):
|
||||||
|
pass
|
||||||
|
|
||||||
|
class Foo3(metaclass=self.Test):
|
||||||
|
def a(self):
|
||||||
|
pass
|
||||||
|
|
||||||
|
def b(self):
|
||||||
|
pass
|
||||||
|
|
||||||
|
class Foo4(metaclass=self.Test):
|
||||||
|
def a(self):
|
||||||
|
pass
|
||||||
|
|
||||||
|
def other(self):
|
||||||
|
pass
|
||||||
|
|
||||||
|
def b(self):
|
||||||
|
pass
|
||||||
|
|
||||||
|
def test_wrong_order(self):
|
||||||
|
with self.assertRaises(WrongMethodDefOrder):
|
||||||
|
class Foo(metaclass=self.Test):
|
||||||
|
def b(self):
|
||||||
|
pass
|
||||||
|
|
||||||
|
def a(self):
|
||||||
|
pass
|
||||||
|
|
||||||
|
|
||||||
|
class TestServiceResource(unittest.TestCase):
|
||||||
|
def test_init_without_order_attr(self):
|
||||||
|
class Foo(base.ServiceResource):
|
||||||
|
def list(self) -> Iterable:
|
||||||
|
pass
|
||||||
|
|
||||||
|
def delete(self, resource: Dict[str, Any]) -> None:
|
||||||
|
pass
|
||||||
|
|
||||||
|
def to_str(resource: Dict[str, Any]) -> str:
|
||||||
|
pass
|
||||||
|
|
||||||
|
self.assertRaisesRegex(ValueError, 'Class .*ORDER.*',
|
||||||
|
Foo, mock.Mock())
|
||||||
|
|
||||||
|
def test_instantiate_without_concrete_methods(self):
|
||||||
|
class Foo(base.ServiceResource):
|
||||||
|
ORDER = 1
|
||||||
|
|
||||||
|
self.assertRaises(TypeError, Foo)
|
||||||
|
|
||||||
|
@mock.patch.multiple(base.ServiceResource, ORDER=12,
|
||||||
|
__abstractmethods__=set())
|
||||||
|
def test_instantiate_nominal(self):
|
||||||
|
creds_manager = mock.Mock()
|
||||||
|
resource_manager = base.ServiceResource(creds_manager)
|
||||||
|
|
||||||
|
self.assertEqual(resource_manager.cloud, creds_manager.cloud)
|
||||||
|
self.assertEqual(resource_manager.options, creds_manager.options)
|
||||||
|
self.assertEqual(resource_manager.cleanup_project_id,
|
||||||
|
creds_manager.project_id)
|
||||||
|
|
||||||
|
self.assertEqual(12, resource_manager.order())
|
||||||
|
self.assertEqual(True, resource_manager.check_prerequisite())
|
||||||
|
|
||||||
|
self.assertRaises(NotImplementedError, resource_manager.delete, '')
|
||||||
|
self.assertRaises(NotImplementedError, resource_manager.to_str, '')
|
||||||
|
self.assertRaises(NotImplementedError, resource_manager.list)
|
||||||
|
|
||||||
|
@mock.patch.multiple(base.ServiceResource, ORDER=12,
|
||||||
|
__abstractmethods__=set())
|
||||||
|
def test_should_delete(self):
|
||||||
|
creds_manager = mock.Mock()
|
||||||
|
resource_manager = base.ServiceResource(creds_manager)
|
||||||
|
|
||||||
|
resource = mock.Mock(get=mock.Mock(side_effect=[None, None]))
|
||||||
|
self.assertEqual(True, resource_manager.should_delete(resource))
|
||||||
|
resource.get.call_args = [mock.call('project_id'),
|
||||||
|
mock.call('tenant_id')]
|
||||||
|
|
||||||
|
resource.get.side_effect = ["Foo", "Bar"]
|
||||||
|
self.assertEqual(False, resource_manager.should_delete(resource))
|
||||||
|
|
||||||
|
resource.get.side_effect = [42, resource_manager.cleanup_project_id]
|
||||||
|
self.assertEqual(True, resource_manager.should_delete(resource))
|
||||||
|
|
||||||
|
@mock.patch('time.sleep', autospec=True)
|
||||||
|
@mock.patch.multiple(base.ServiceResource, ORDER=12,
|
||||||
|
__abstractmethods__=set())
|
||||||
|
@mock.patch.object(base.ServiceResource, 'check_prerequisite',
|
||||||
|
return_value=False)
|
||||||
|
def test_wait_for_check_prerequisite_runtimeerror(
|
||||||
|
self, mock_check_prerequisite, mock_sleep):
|
||||||
|
resource_manager = base.ServiceResource(mock.Mock())
|
||||||
|
mock_exit = mock.Mock(is_set=mock.Mock(return_value=False))
|
||||||
|
|
||||||
|
with mock.patch('time.time') as mock_time:
|
||||||
|
mock_time.side_effect = generate_timeout_series(30)
|
||||||
|
self.assertRaisesRegex(
|
||||||
|
exceptions.TimeoutError, "^Timeout exceeded .*",
|
||||||
|
resource_manager.wait_for_check_prerequisite, mock_exit
|
||||||
|
)
|
||||||
|
|
||||||
|
self.assertEqual(mock_check_prerequisite.call_args_list,
|
||||||
|
[mock.call()] * (120 // 30 - 1))
|
||||||
|
self.assertEqual(mock_sleep.call_args_list,
|
||||||
|
[mock.call(i) for i in (2, 4, 8)])
|
||||||
|
|
||||||
|
mock_sleep.reset_mock()
|
||||||
|
mock_check_prerequisite.reset_mock()
|
||||||
|
mock_exit.is_set.return_value = True
|
||||||
|
self.assertRaisesRegex(
|
||||||
|
RuntimeError, ".* exited because it was interrupted .*",
|
||||||
|
resource_manager.wait_for_check_prerequisite, mock_exit
|
||||||
|
)
|
||||||
|
|
||||||
|
@mock.patch('time.sleep', mock.Mock(spec_set=time.sleep))
|
||||||
|
@mock.patch.multiple(base.ServiceResource, ORDER=12,
|
||||||
|
__abstractmethods__=set())
|
||||||
|
def test_wait_for_check_prerequisite_nominal(self):
|
||||||
|
resource_manager = base.ServiceResource(mock.Mock())
|
||||||
|
|
||||||
|
with mock.patch.object(resource_manager, 'check_prerequisite') as m:
|
||||||
|
m.side_effect = [False, False, True]
|
||||||
|
resource_manager.wait_for_check_prerequisite(
|
||||||
|
mock.Mock(is_set=mock.Mock(return_value=False)))
|
||||||
|
|
||||||
|
self.assertEqual(3, m.call_count)
|
103
ospurge/tests/resources/test_cinder.py
Normal file
103
ospurge/tests/resources/test_cinder.py
Normal file
@ -0,0 +1,103 @@
|
|||||||
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||||
|
# not use this file except in compliance with the License. You may obtain
|
||||||
|
# a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||||
|
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||||
|
# License for the specific language governing permissions and limitations
|
||||||
|
# under the License.
|
||||||
|
import unittest
|
||||||
|
from unittest import mock
|
||||||
|
|
||||||
|
import shade
|
||||||
|
|
||||||
|
from ospurge.resources import cinder
|
||||||
|
|
||||||
|
|
||||||
|
class TestBackups(unittest.TestCase):
|
||||||
|
def setUp(self):
|
||||||
|
self.cloud = mock.Mock(spec_set=shade.openstackcloud.OpenStackCloud)
|
||||||
|
self.creds_manager = mock.Mock(cloud=self.cloud)
|
||||||
|
|
||||||
|
def test_list(self):
|
||||||
|
self.assertIs(self.cloud.list_volume_backups.return_value,
|
||||||
|
cinder.Backups(self.creds_manager).list())
|
||||||
|
self.cloud.list_volume_backups.assert_called_once_with()
|
||||||
|
|
||||||
|
def test_delete(self):
|
||||||
|
backup = mock.MagicMock()
|
||||||
|
self.assertIsNone(cinder.Backups(self.creds_manager).delete(backup))
|
||||||
|
self.cloud.delete_volume_backup.assert_called_once_with(backup['id'])
|
||||||
|
|
||||||
|
def test_to_string(self):
|
||||||
|
backup = mock.MagicMock()
|
||||||
|
self.assertIn("Volume Backup",
|
||||||
|
cinder.Backups(self.creds_manager).to_str(backup))
|
||||||
|
|
||||||
|
|
||||||
|
class TestSnapshots(unittest.TestCase):
|
||||||
|
def setUp(self):
|
||||||
|
self.cloud = mock.Mock(spec_set=shade.openstackcloud.OpenStackCloud)
|
||||||
|
self.creds_manager = mock.Mock(cloud=self.cloud)
|
||||||
|
|
||||||
|
def test_list(self):
|
||||||
|
self.assertIs(self.cloud.list_volume_snapshots.return_value,
|
||||||
|
cinder.Snapshots(self.creds_manager).list())
|
||||||
|
self.cloud.list_volume_snapshots.assert_called_once_with()
|
||||||
|
|
||||||
|
def test_delete(self):
|
||||||
|
snapshot = mock.MagicMock()
|
||||||
|
self.assertIsNone(
|
||||||
|
cinder.Snapshots(self.creds_manager).delete(snapshot))
|
||||||
|
self.cloud.delete_volume_snapshot.assert_called_once_with(
|
||||||
|
snapshot['id'])
|
||||||
|
|
||||||
|
def test_to_string(self):
|
||||||
|
snapshot = mock.MagicMock()
|
||||||
|
self.assertIn("Volume Snapshot ",
|
||||||
|
cinder.Snapshots(self.creds_manager).to_str(snapshot))
|
||||||
|
|
||||||
|
|
||||||
|
class TestVolumes(unittest.TestCase):
|
||||||
|
def setUp(self):
|
||||||
|
self.cloud = mock.Mock(spec_set=shade.openstackcloud.OpenStackCloud)
|
||||||
|
self.creds_manager = mock.Mock(cloud=self.cloud, project_id=42)
|
||||||
|
|
||||||
|
def test_check_prerequisite(self):
|
||||||
|
self.cloud.list_volume_snapshots.return_value = []
|
||||||
|
self.assertEqual(
|
||||||
|
False,
|
||||||
|
cinder.Volumes(self.creds_manager).check_prerequisite()
|
||||||
|
)
|
||||||
|
self.cloud.list_volume_snapshots.assert_called_once_with()
|
||||||
|
self.cloud.list_servers.assert_called_once_with()
|
||||||
|
|
||||||
|
def test_list(self):
|
||||||
|
self.assertIs(self.cloud.list_volumes.return_value,
|
||||||
|
cinder.Volumes(self.creds_manager).list())
|
||||||
|
self.cloud.list_volumes.assert_called_once_with()
|
||||||
|
|
||||||
|
def test_should_delete(self):
|
||||||
|
self.assertEqual(
|
||||||
|
False,
|
||||||
|
cinder.Volumes(self.creds_manager).should_delete(
|
||||||
|
{'os-vol-tenant-attr:tenant_id': 84})
|
||||||
|
)
|
||||||
|
self.assertEqual(
|
||||||
|
True,
|
||||||
|
cinder.Volumes(self.creds_manager).should_delete(
|
||||||
|
{'os-vol-tenant-attr:tenant_id': 42})
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_delete(self):
|
||||||
|
volume = mock.MagicMock()
|
||||||
|
self.assertIsNone(cinder.Volumes(self.creds_manager).delete(volume))
|
||||||
|
self.cloud.delete_volume.assert_called_once_with(volume['id'])
|
||||||
|
|
||||||
|
def test_to_string(self):
|
||||||
|
volume = mock.MagicMock()
|
||||||
|
self.assertIn("Volume ",
|
||||||
|
cinder.Volumes(self.creds_manager).to_str(volume))
|
85
ospurge/tests/resources/test_glance.py
Normal file
85
ospurge/tests/resources/test_glance.py
Normal file
@ -0,0 +1,85 @@
|
|||||||
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||||
|
# not use this file except in compliance with the License. You may obtain
|
||||||
|
# a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||||
|
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||||
|
# License for the specific language governing permissions and limitations
|
||||||
|
# under the License.
|
||||||
|
import unittest
|
||||||
|
from unittest import mock
|
||||||
|
|
||||||
|
import shade
|
||||||
|
|
||||||
|
from ospurge.resources import glance
|
||||||
|
|
||||||
|
|
||||||
|
class TestListImagesMixin(unittest.TestCase):
|
||||||
|
def setUp(self):
|
||||||
|
self.cloud = mock.Mock(spec_set=shade.openstackcloud.OpenStackCloud)
|
||||||
|
self.img_lister = glance.ListImagesMixin()
|
||||||
|
self.img_lister.cloud = self.cloud
|
||||||
|
self.img_lister.cleanup_project_id = 42
|
||||||
|
self.img_lister.options = None
|
||||||
|
|
||||||
|
def test_list_images_by_owner_no_image(self):
|
||||||
|
self.cloud.list_images.return_value = []
|
||||||
|
self.assertEqual([], self.img_lister.list_images_by_owner())
|
||||||
|
|
||||||
|
def test_list_images_by_owner_different_owner(self):
|
||||||
|
self.cloud.list_images.return_value = [
|
||||||
|
{'owner': 84},
|
||||||
|
{'owner': 85}
|
||||||
|
]
|
||||||
|
self.assertEqual([], self.img_lister.list_images_by_owner())
|
||||||
|
|
||||||
|
def test_list_images_by_owner_public_images(self):
|
||||||
|
self.cloud.list_images.return_value = [
|
||||||
|
{'owner': 42, 'is_public': True},
|
||||||
|
{'owner': 42, 'visibility': 'public'},
|
||||||
|
]
|
||||||
|
with mock.patch.object(self.img_lister, 'options',
|
||||||
|
mock.Mock(delete_shared_resources=True)):
|
||||||
|
self.assertEqual(self.cloud.list_images.return_value,
|
||||||
|
self.img_lister.list_images_by_owner())
|
||||||
|
|
||||||
|
with mock.patch.object(self.img_lister, 'options',
|
||||||
|
mock.Mock(delete_shared_resources=False)):
|
||||||
|
self.assertEqual([], self.img_lister.list_images_by_owner())
|
||||||
|
|
||||||
|
|
||||||
|
class TestImages(unittest.TestCase):
|
||||||
|
def setUp(self):
|
||||||
|
self.cloud = mock.Mock(spec_set=shade.openstackcloud.OpenStackCloud)
|
||||||
|
self.creds_manager = mock.Mock(cloud=self.cloud, project_id=42)
|
||||||
|
|
||||||
|
@mock.patch.object(glance.ListImagesMixin, 'list_images_by_owner')
|
||||||
|
def test_list(self, mock_list_images_by_owner):
|
||||||
|
self.assertIs(mock_list_images_by_owner.return_value,
|
||||||
|
glance.Images(self.creds_manager).list())
|
||||||
|
mock_list_images_by_owner.assert_called_once_with()
|
||||||
|
|
||||||
|
def test_should_delete(self):
|
||||||
|
self.assertEqual(
|
||||||
|
False,
|
||||||
|
glance.Images(self.creds_manager).should_delete(
|
||||||
|
{'owner': 84})
|
||||||
|
)
|
||||||
|
self.assertEqual(
|
||||||
|
True,
|
||||||
|
glance.Images(self.creds_manager).should_delete(
|
||||||
|
{'owner': 42})
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_delete(self):
|
||||||
|
image = mock.MagicMock()
|
||||||
|
self.assertIsNone(glance.Images(self.creds_manager).delete(image))
|
||||||
|
self.cloud.delete_image.assert_called_once_with(image['id'])
|
||||||
|
|
||||||
|
def test_to_string(self):
|
||||||
|
image = mock.MagicMock()
|
||||||
|
self.assertIn("Image (",
|
||||||
|
glance.Images(self.creds_manager).to_str(image))
|
233
ospurge/tests/resources/test_neutron.py
Normal file
233
ospurge/tests/resources/test_neutron.py
Normal file
@ -0,0 +1,233 @@
|
|||||||
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||||
|
# not use this file except in compliance with the License. You may obtain
|
||||||
|
# a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||||
|
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||||
|
# License for the specific language governing permissions and limitations
|
||||||
|
# under the License.
|
||||||
|
import unittest
|
||||||
|
from unittest import mock
|
||||||
|
|
||||||
|
import shade
|
||||||
|
|
||||||
|
from ospurge.resources import neutron
|
||||||
|
|
||||||
|
|
||||||
|
class TestFloatingIPs(unittest.TestCase):
|
||||||
|
def setUp(self):
|
||||||
|
self.cloud = mock.Mock(spec_set=shade.openstackcloud.OpenStackCloud)
|
||||||
|
self.creds_manager = mock.Mock(cloud=self.cloud)
|
||||||
|
|
||||||
|
def test_check_prerequisite(self):
|
||||||
|
self.cloud.list_servers.return_value = ['vm1']
|
||||||
|
self.assertEqual(
|
||||||
|
False,
|
||||||
|
neutron.FloatingIPs(self.creds_manager).check_prerequisite()
|
||||||
|
)
|
||||||
|
self.cloud.list_servers.return_value = []
|
||||||
|
self.assertEqual(
|
||||||
|
True,
|
||||||
|
neutron.FloatingIPs(self.creds_manager).check_prerequisite()
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_list(self):
|
||||||
|
self.assertIs(self.cloud.search_floating_ips.return_value,
|
||||||
|
neutron.FloatingIPs(self.creds_manager).list())
|
||||||
|
self.cloud.search_floating_ips.assert_called_once_with(
|
||||||
|
filters={'tenant_id': self.creds_manager.project_id}
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_delete(self):
|
||||||
|
fip = mock.MagicMock()
|
||||||
|
self.assertIsNone(neutron.FloatingIPs(self.creds_manager).delete(fip))
|
||||||
|
self.cloud.delete_floating_ip.assert_called_once_with(
|
||||||
|
fip['id'])
|
||||||
|
|
||||||
|
def test_to_string(self):
|
||||||
|
fip = mock.MagicMock()
|
||||||
|
self.assertIn("Floating IP ",
|
||||||
|
neutron.FloatingIPs(self.creds_manager).to_str(fip))
|
||||||
|
|
||||||
|
|
||||||
|
class TestRouterInterfaces(unittest.TestCase):
|
||||||
|
def setUp(self):
|
||||||
|
self.cloud = mock.Mock(spec_set=shade.openstackcloud.OpenStackCloud)
|
||||||
|
self.creds_manager = mock.Mock(cloud=self.cloud)
|
||||||
|
|
||||||
|
def test_check_prerequisite(self):
|
||||||
|
ifaces_manager = neutron.RouterInterfaces(self.creds_manager)
|
||||||
|
|
||||||
|
self.cloud.list_servers.return_value = []
|
||||||
|
self.cloud.search_floating_ips.return_value = ["foo"]
|
||||||
|
self.assertEqual(False, ifaces_manager.check_prerequisite())
|
||||||
|
|
||||||
|
self.cloud.search_floating_ips.return_value = []
|
||||||
|
self.assertEqual(True, ifaces_manager.check_prerequisite())
|
||||||
|
|
||||||
|
self.cloud.list_servers.return_value = ["bar"]
|
||||||
|
self.assertEqual(False, ifaces_manager.check_prerequisite())
|
||||||
|
|
||||||
|
self.cloud.search_floating_ips.assert_called_with(
|
||||||
|
filters={'tenant_id': self.creds_manager.project_id}
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_list(self):
|
||||||
|
self.assertIs(self.cloud.list_ports.return_value,
|
||||||
|
neutron.RouterInterfaces(self.creds_manager).list())
|
||||||
|
self.cloud.list_ports.assert_called_once_with(
|
||||||
|
filters={'device_owner': 'network:router_interface',
|
||||||
|
'tenant_id': self.creds_manager.project_id}
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_delete(self):
|
||||||
|
iface = mock.MagicMock()
|
||||||
|
self.assertIsNone(neutron.RouterInterfaces(self.creds_manager).delete(
|
||||||
|
iface))
|
||||||
|
self.cloud.remove_router_interface.assert_called_once_with(
|
||||||
|
{'id': iface['device_id']},
|
||||||
|
port_id=iface['id']
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_to_string(self):
|
||||||
|
iface = mock.MagicMock()
|
||||||
|
self.assertIn(
|
||||||
|
"Router Interface (",
|
||||||
|
neutron.RouterInterfaces(self.creds_manager).to_str(iface)
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
class TestRouters(unittest.TestCase):
|
||||||
|
def setUp(self):
|
||||||
|
self.cloud = mock.Mock(spec_set=shade.openstackcloud.OpenStackCloud)
|
||||||
|
self.creds_manager = mock.Mock(cloud=self.cloud)
|
||||||
|
|
||||||
|
def test_check_prerequisite(self):
|
||||||
|
self.cloud.list_ports.return_value = []
|
||||||
|
self.assertEqual(
|
||||||
|
True, neutron.Routers(self.creds_manager).check_prerequisite())
|
||||||
|
|
||||||
|
self.cloud.list_ports.return_value = ['foo']
|
||||||
|
self.assertEqual(
|
||||||
|
False, neutron.Routers(self.creds_manager).check_prerequisite())
|
||||||
|
|
||||||
|
self.cloud.list_ports.assert_called_with(
|
||||||
|
filters={'device_owner': 'network:router_interface',
|
||||||
|
'tenant_id': self.creds_manager.project_id}
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_list(self):
|
||||||
|
self.assertIs(self.cloud.list_routers.return_value,
|
||||||
|
neutron.Routers(self.creds_manager).list())
|
||||||
|
self.cloud.list_routers.assert_called_once_with()
|
||||||
|
|
||||||
|
def test_delete(self):
|
||||||
|
router = mock.MagicMock()
|
||||||
|
self.assertIsNone(neutron.Routers(self.creds_manager).delete(router))
|
||||||
|
self.cloud.delete_router.assert_called_once_with(router['id'])
|
||||||
|
|
||||||
|
def test_to_string(self):
|
||||||
|
router = mock.MagicMock()
|
||||||
|
self.assertIn("Router (",
|
||||||
|
neutron.Routers(self.creds_manager).to_str(router))
|
||||||
|
|
||||||
|
|
||||||
|
class TestPorts(unittest.TestCase):
|
||||||
|
def setUp(self):
|
||||||
|
self.cloud = mock.Mock(spec_set=shade.openstackcloud.OpenStackCloud)
|
||||||
|
self.creds_manager = mock.Mock(cloud=self.cloud)
|
||||||
|
|
||||||
|
def test_list(self):
|
||||||
|
self.cloud.list_ports.return_value = [
|
||||||
|
{'device_owner': 'network:dhcp'},
|
||||||
|
{'device_owner': 'network:router_interface'},
|
||||||
|
{'device_owner': ''}
|
||||||
|
]
|
||||||
|
ports = neutron.Ports(self.creds_manager).list()
|
||||||
|
self.assertEqual([{'device_owner': ''}], ports)
|
||||||
|
self.cloud.list_ports.assert_called_once_with(
|
||||||
|
filters={'tenant_id': self.creds_manager.project_id})
|
||||||
|
|
||||||
|
def test_delete(self):
|
||||||
|
port = mock.MagicMock()
|
||||||
|
self.assertIsNone(neutron.Ports(self.creds_manager).delete(port))
|
||||||
|
self.cloud.delete_port.assert_called_once_with(port['id'])
|
||||||
|
|
||||||
|
def test_to_string(self):
|
||||||
|
port = mock.MagicMock()
|
||||||
|
self.assertIn("Port (",
|
||||||
|
neutron.Ports(self.creds_manager).to_str(port))
|
||||||
|
|
||||||
|
|
||||||
|
class TestNetworks(unittest.TestCase):
|
||||||
|
def setUp(self):
|
||||||
|
self.cloud = mock.Mock(spec_set=shade.openstackcloud.OpenStackCloud)
|
||||||
|
self.creds_manager = mock.Mock(cloud=self.cloud)
|
||||||
|
|
||||||
|
def test_check_prerequisite(self):
|
||||||
|
self.cloud.list_ports.return_value = [{'device_owner': 'network:dhcp'}]
|
||||||
|
self.assertEqual(
|
||||||
|
True, neutron.Networks(self.creds_manager).check_prerequisite())
|
||||||
|
|
||||||
|
self.cloud.list_ports.return_value = [{'device_owner': 'compute:None'}]
|
||||||
|
self.assertEqual(
|
||||||
|
False, neutron.Networks(self.creds_manager).check_prerequisite())
|
||||||
|
|
||||||
|
self.cloud.list_ports.assert_called_with(
|
||||||
|
filters={'tenant_id': self.creds_manager.project_id}
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_list(self):
|
||||||
|
self.creds_manager.options.delete_shared_resources = False
|
||||||
|
self.cloud.list_networks.return_value = [
|
||||||
|
{'router:external': True}, {'router:external': True}]
|
||||||
|
nw_list = neutron.Networks(self.creds_manager).list()
|
||||||
|
self.assertEqual(0, len(nw_list))
|
||||||
|
|
||||||
|
self.creds_manager.options.delete_shared_resources = True
|
||||||
|
nw_list = neutron.Networks(self.creds_manager).list()
|
||||||
|
self.assertEqual(2, len(nw_list))
|
||||||
|
|
||||||
|
self.cloud.list_networks.assert_called_with(
|
||||||
|
filters={'tenant_id': self.creds_manager.project_id}
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_delete(self):
|
||||||
|
nw = mock.MagicMock()
|
||||||
|
self.assertIsNone(neutron.Networks(self.creds_manager).delete(nw))
|
||||||
|
self.cloud.delete_network.assert_called_once_with(nw['id'])
|
||||||
|
|
||||||
|
def test_to_string(self):
|
||||||
|
nw = mock.MagicMock()
|
||||||
|
self.assertIn("Network (",
|
||||||
|
neutron.Networks(self.creds_manager).to_str(nw))
|
||||||
|
|
||||||
|
|
||||||
|
class TestSecurityGroups(unittest.TestCase):
|
||||||
|
def setUp(self):
|
||||||
|
self.cloud = mock.Mock(spec_set=shade.openstackcloud.OpenStackCloud)
|
||||||
|
self.creds_manager = mock.Mock(cloud=self.cloud)
|
||||||
|
|
||||||
|
def test_list(self):
|
||||||
|
self.cloud.list_security_groups.return_value = [
|
||||||
|
{'name': 'default'}, {'name': 'bar'}
|
||||||
|
]
|
||||||
|
self.assertEqual(
|
||||||
|
1, len(neutron.SecurityGroups(self.creds_manager).list()))
|
||||||
|
self.cloud.list_security_groups.assert_called_once_with(
|
||||||
|
filters={'tenant_id': self.creds_manager.project_id}
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_delete(self):
|
||||||
|
sg = mock.MagicMock()
|
||||||
|
self.assertIsNone(
|
||||||
|
neutron.SecurityGroups(self.creds_manager).delete(sg))
|
||||||
|
self.cloud.delete_security_group.assert_called_once_with(sg['id'])
|
||||||
|
|
||||||
|
def test_to_string(self):
|
||||||
|
sg = mock.MagicMock()
|
||||||
|
self.assertIn("Security Group (",
|
||||||
|
neutron.SecurityGroups(self.creds_manager).to_str(sg))
|
38
ospurge/tests/resources/test_nova.py
Normal file
38
ospurge/tests/resources/test_nova.py
Normal file
@ -0,0 +1,38 @@
|
|||||||
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||||
|
# not use this file except in compliance with the License. You may obtain
|
||||||
|
# a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||||
|
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||||
|
# License for the specific language governing permissions and limitations
|
||||||
|
# under the License.
|
||||||
|
import unittest
|
||||||
|
from unittest import mock
|
||||||
|
|
||||||
|
import shade
|
||||||
|
|
||||||
|
from ospurge.resources import nova
|
||||||
|
|
||||||
|
|
||||||
|
class TestServers(unittest.TestCase):
|
||||||
|
def setUp(self):
|
||||||
|
self.cloud = mock.Mock(spec_set=shade.openstackcloud.OpenStackCloud)
|
||||||
|
self.creds_manager = mock.Mock(cloud=self.cloud)
|
||||||
|
|
||||||
|
def test_list(self):
|
||||||
|
self.assertIs(self.cloud.list_servers.return_value,
|
||||||
|
nova.Servers(self.creds_manager).list())
|
||||||
|
self.cloud.list_servers.assert_called_once_with()
|
||||||
|
|
||||||
|
def test_delete(self):
|
||||||
|
server = mock.MagicMock()
|
||||||
|
self.assertIsNone(nova.Servers(self.creds_manager).delete(server))
|
||||||
|
self.cloud.delete_server.assert_called_once_with(server['id'])
|
||||||
|
|
||||||
|
def test_to_string(self):
|
||||||
|
server = mock.MagicMock()
|
||||||
|
self.assertIn("VM (",
|
||||||
|
nova.Servers(self.creds_manager).to_str(server))
|
122
ospurge/tests/resources/test_swift.py
Normal file
122
ospurge/tests/resources/test_swift.py
Normal file
@ -0,0 +1,122 @@
|
|||||||
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||||
|
# not use this file except in compliance with the License. You may obtain
|
||||||
|
# a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||||
|
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||||
|
# License for the specific language governing permissions and limitations
|
||||||
|
# under the License.
|
||||||
|
import unittest
|
||||||
|
from unittest import mock
|
||||||
|
|
||||||
|
import shade
|
||||||
|
|
||||||
|
from ospurge.resources import swift
|
||||||
|
|
||||||
|
|
||||||
|
class TestListObjectsMixin(unittest.TestCase):
|
||||||
|
def setUp(self):
|
||||||
|
self.cloud = mock.Mock(spec_set=shade.openstackcloud.OpenStackCloud)
|
||||||
|
self.obj_lister = swift.ListObjectsMixin()
|
||||||
|
self.obj_lister.cloud = self.cloud
|
||||||
|
|
||||||
|
def test_list_objects(self):
|
||||||
|
containers = [{"name": "foo"}, {"name": "bar"}]
|
||||||
|
objects = {
|
||||||
|
"foo": [{"name": "toto"}, {"name": "tata"}],
|
||||||
|
"bar": [{"name": "titi"}, {"name": "tutu"}]
|
||||||
|
}
|
||||||
|
|
||||||
|
def list_objects(container_name):
|
||||||
|
return objects[container_name]
|
||||||
|
|
||||||
|
self.cloud.list_containers.return_value = containers
|
||||||
|
self.cloud.list_objects.side_effect = list_objects
|
||||||
|
self.assertEqual(
|
||||||
|
[{'name': 'toto', 'container_name': 'foo'},
|
||||||
|
{'name': 'tata', 'container_name': 'foo'},
|
||||||
|
{'name': 'titi', 'container_name': 'bar'},
|
||||||
|
{'name': 'tutu', 'container_name': 'bar'}],
|
||||||
|
list(self.obj_lister.list_objects())
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
class TestObjects(unittest.TestCase):
|
||||||
|
def setUp(self):
|
||||||
|
self.cloud = mock.Mock(spec_set=shade.openstackcloud.OpenStackCloud)
|
||||||
|
self.creds_manager = mock.Mock(cloud=self.cloud)
|
||||||
|
|
||||||
|
def test_check_prerequisite(self):
|
||||||
|
objects_manager = swift.Objects(self.creds_manager)
|
||||||
|
with mock.patch.object(objects_manager, 'list_images_by_owner') as m:
|
||||||
|
m.return_value = []
|
||||||
|
self.cloud.list_volume_backups.return_value = ["foo"]
|
||||||
|
self.assertEqual(False, objects_manager.check_prerequisite())
|
||||||
|
|
||||||
|
self.cloud.list_volume_backups.return_value = []
|
||||||
|
self.assertEqual(True, objects_manager.check_prerequisite())
|
||||||
|
|
||||||
|
m.return_value = ["bar"]
|
||||||
|
self.assertEqual(False, objects_manager.check_prerequisite())
|
||||||
|
|
||||||
|
@mock.patch('ospurge.resources.swift.ListObjectsMixin.list_objects')
|
||||||
|
def test_list(self, mock_list_objects):
|
||||||
|
def list_objects():
|
||||||
|
yield 1
|
||||||
|
yield 2
|
||||||
|
|
||||||
|
mock_list_objects.side_effect = list_objects
|
||||||
|
|
||||||
|
objects = swift.Objects(self.creds_manager).list()
|
||||||
|
self.assertEqual(1, next(objects))
|
||||||
|
self.assertEqual(2, next(objects))
|
||||||
|
self.assertRaises(StopIteration, next, objects)
|
||||||
|
|
||||||
|
def test_delete(self):
|
||||||
|
obj = mock.MagicMock()
|
||||||
|
self.assertIsNone(swift.Objects(self.creds_manager).delete(obj))
|
||||||
|
self.cloud.delete_object.assert_called_once_with(
|
||||||
|
obj['container_name'], obj['name'])
|
||||||
|
|
||||||
|
def test_to_string(self):
|
||||||
|
obj = mock.MagicMock()
|
||||||
|
self.assertIn("Object '",
|
||||||
|
swift.Objects(self.creds_manager).to_str(obj))
|
||||||
|
|
||||||
|
|
||||||
|
class TestContainers(unittest.TestCase):
|
||||||
|
def setUp(self):
|
||||||
|
self.cloud = mock.Mock(spec_set=shade.openstackcloud.OpenStackCloud)
|
||||||
|
self.creds_manager = mock.Mock(cloud=self.cloud)
|
||||||
|
|
||||||
|
@mock.patch('ospurge.resources.swift.ListObjectsMixin.list_objects')
|
||||||
|
def test_check_prerequisite(self, mock_list_objects):
|
||||||
|
mock_list_objects.return_value = ['obj1']
|
||||||
|
self.assertEqual(
|
||||||
|
False,
|
||||||
|
swift.Containers(self.creds_manager).check_prerequisite()
|
||||||
|
)
|
||||||
|
mock_list_objects.return_value = []
|
||||||
|
self.assertEqual(
|
||||||
|
True,
|
||||||
|
swift.Containers(self.creds_manager).check_prerequisite()
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_list(self):
|
||||||
|
self.assertIs(self.cloud.list_containers.return_value,
|
||||||
|
swift.Containers(self.creds_manager).list())
|
||||||
|
self.cloud.list_containers.assert_called_once_with()
|
||||||
|
|
||||||
|
def test_delete(self):
|
||||||
|
cont = mock.MagicMock()
|
||||||
|
self.assertIsNone(swift.Containers(self.creds_manager).delete(cont))
|
||||||
|
self.cloud.delete_container.assert_called_once_with(cont['name'])
|
||||||
|
|
||||||
|
def test_to_string(self):
|
||||||
|
container = mock.MagicMock()
|
||||||
|
self.assertIn("Container (",
|
||||||
|
swift.Containers(self.creds_manager).to_str(
|
||||||
|
container))
|
@ -1,810 +0,0 @@
|
|||||||
#!/usr/bin/env python
|
|
||||||
# -*- encoding: utf-8 -*-
|
|
||||||
#
|
|
||||||
# This software is released under the MIT License.
|
|
||||||
#
|
|
||||||
# Copyright (c) 2014 Cloudwatt
|
|
||||||
#
|
|
||||||
# Permission is hereby granted, free of charge, to any person obtaining a copy
|
|
||||||
# of this software and associated documentation files (the "Software"), to deal
|
|
||||||
# in the Software without restriction, including without limitation the rights
|
|
||||||
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
|
||||||
# copies of the Software, and to permit persons to whom the Software is
|
|
||||||
# furnished to do so, subject to the following conditions:
|
|
||||||
#
|
|
||||||
# The above copyright notice and this permission notice shall be included in all
|
|
||||||
# copies or substantial portions of the Software.
|
|
||||||
#
|
|
||||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
|
||||||
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
|
||||||
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
|
||||||
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
|
||||||
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
|
||||||
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
|
||||||
# SOFTWARE.
|
|
||||||
|
|
||||||
import itertools
|
|
||||||
import json as jsonutils
|
|
||||||
|
|
||||||
import httpretty
|
|
||||||
import testtools
|
|
||||||
|
|
||||||
import cinderclient
|
|
||||||
|
|
||||||
from ospurge import base
|
|
||||||
from ospurge import client
|
|
||||||
from ospurge.tests import client_fixtures
|
|
||||||
|
|
||||||
# Disable InsecurePlatformWarning which is irrelevant in unittests with
|
|
||||||
# mocked https requests and only clutters the results.
|
|
||||||
import requests
|
|
||||||
requests.packages.urllib3.disable_warnings()
|
|
||||||
|
|
||||||
|
|
||||||
USERNAME = "username"
|
|
||||||
PASSWORD = "password"
|
|
||||||
PROJECT_NAME = "project"
|
|
||||||
AUTH_URL = client_fixtures.AUTH_URL
|
|
||||||
|
|
||||||
|
|
||||||
class HttpTest(testtools.TestCase):
|
|
||||||
|
|
||||||
def stub_url(self, method, parts=None, base_url=None, json=None, **kwargs):
|
|
||||||
if not base_url:
|
|
||||||
base_url = self.TEST_URL
|
|
||||||
if json is not None:
|
|
||||||
kwargs['body'] = jsonutils.dumps(json)
|
|
||||||
kwargs['content_type'] = 'application/json'
|
|
||||||
if parts:
|
|
||||||
url = '/'.join([p.strip('/') for p in [base_url] + parts])
|
|
||||||
else:
|
|
||||||
url = base_url
|
|
||||||
httpretty.register_uri(method, url, **kwargs)
|
|
||||||
|
|
||||||
def stub_auth(self):
|
|
||||||
self.stub_url('GET', base_url=AUTH_URL,
|
|
||||||
json=client_fixtures.AUTH_URL_RESPONSE)
|
|
||||||
self.stub_url('POST', parts=['tokens'], base_url=AUTH_URL,
|
|
||||||
json=client_fixtures.PROJECT_SCOPED_TOKEN)
|
|
||||||
self.stub_url('GET', parts=['roles'],
|
|
||||||
base_url=client_fixtures.ROLE_URL,
|
|
||||||
json=client_fixtures.ROLE_LIST)
|
|
||||||
|
|
||||||
|
|
||||||
class SessionTest(HttpTest):
|
|
||||||
|
|
||||||
@httpretty.activate
|
|
||||||
def test_init(self):
|
|
||||||
self.stub_auth()
|
|
||||||
session = base.Session(USERNAME, PASSWORD,
|
|
||||||
client_fixtures.PROJECT_ID, AUTH_URL,
|
|
||||||
region_name="RegionOne")
|
|
||||||
self.assertEqual(session.token, client_fixtures.TOKEN_ID)
|
|
||||||
self.assertEqual(session.user_id, client_fixtures.USER_ID)
|
|
||||||
self.assertEqual(session.project_id, client_fixtures.PROJECT_ID)
|
|
||||||
self.assertTrue(session.is_admin)
|
|
||||||
|
|
||||||
@httpretty.activate
|
|
||||||
def test_get_public_endpoint(self):
|
|
||||||
self.stub_auth()
|
|
||||||
session = base.Session(USERNAME, PASSWORD,
|
|
||||||
client_fixtures.PROJECT_ID, AUTH_URL,
|
|
||||||
region_name="RegionOne")
|
|
||||||
endpoint = session.get_endpoint('volume')
|
|
||||||
self.assertEqual(endpoint, client_fixtures.VOLUME_PUBLIC_ENDPOINT)
|
|
||||||
endpoint = session.get_endpoint('image')
|
|
||||||
self.assertEqual(endpoint, client_fixtures.IMAGE_PUBLIC_ENDPOINT)
|
|
||||||
|
|
||||||
@httpretty.activate
|
|
||||||
def test_get_internal_endpoint(self):
|
|
||||||
self.stub_auth()
|
|
||||||
session = base.Session(USERNAME, PASSWORD,
|
|
||||||
client_fixtures.PROJECT_ID, AUTH_URL,
|
|
||||||
region_name="RegionOne",
|
|
||||||
endpoint_type='internalURL')
|
|
||||||
endpoint = session.get_endpoint('volume')
|
|
||||||
self.assertEqual(endpoint, client_fixtures.VOLUME_INTERNAL_ENDPOINT)
|
|
||||||
endpoint = session.get_endpoint('image')
|
|
||||||
self.assertEqual(endpoint, client_fixtures.IMAGE_INTERNAL_ENDPOINT)
|
|
||||||
|
|
||||||
# Abstract class
|
|
||||||
|
|
||||||
|
|
||||||
class TestResourcesBase(HttpTest):
|
|
||||||
|
|
||||||
"""Creates a session object that can be used to test any service."""
|
|
||||||
@httpretty.activate
|
|
||||||
def setUp(self):
|
|
||||||
super(TestResourcesBase, self).setUp()
|
|
||||||
self.stub_auth()
|
|
||||||
self.session = base.Session(USERNAME, PASSWORD,
|
|
||||||
client_fixtures.PROJECT_ID, AUTH_URL,
|
|
||||||
region_name="RegionOne")
|
|
||||||
# We can't add other stubs in subclasses setUp because
|
|
||||||
# httpretty.dactivate() is called after this set_up (so during the
|
|
||||||
# super call to this method in subclasses). and extra stubs will not
|
|
||||||
# work. if you need extra stubs to be done during setUp, write them
|
|
||||||
# in an 'extra_set_up' method. instead of in the subclasses setUp
|
|
||||||
if hasattr(self, 'extra_set_up'):
|
|
||||||
self.extra_set_up()
|
|
||||||
|
|
||||||
@httpretty.activate
|
|
||||||
def _test_list(self):
|
|
||||||
self.stub_auth()
|
|
||||||
self.stub_list()
|
|
||||||
elts = list(self.resources.list())
|
|
||||||
# Some Openstack resources use attributes, while others use dicts
|
|
||||||
try:
|
|
||||||
ids = [elt.id for elt in elts]
|
|
||||||
except AttributeError:
|
|
||||||
ids = [elt['id'] for elt in elts]
|
|
||||||
self.assertEqual(self.IDS, ids)
|
|
||||||
|
|
||||||
@httpretty.activate
|
|
||||||
def _test_delete(self):
|
|
||||||
self.stub_auth()
|
|
||||||
self.stub_list()
|
|
||||||
self.stub_delete()
|
|
||||||
elts = self.resources.list()
|
|
||||||
# List() must return an iterable
|
|
||||||
res = itertools.islice(elts, 1).next()
|
|
||||||
self.resources.delete(res) # Checks this doesn't raise an exception
|
|
||||||
|
|
||||||
|
|
||||||
class TestSwiftBase(TestResourcesBase):
|
|
||||||
TEST_URL = client_fixtures.STORAGE_PUBLIC_ENDPOINT
|
|
||||||
|
|
||||||
|
|
||||||
class TestSwiftResources(TestSwiftBase):
|
|
||||||
|
|
||||||
@httpretty.activate
|
|
||||||
def test_list_containers(self):
|
|
||||||
self.stub_url('GET', json=client_fixtures.STORAGE_CONTAINERS_LIST)
|
|
||||||
swift = client.SwiftResources(self.session)
|
|
||||||
conts = list(swift.list_containers())
|
|
||||||
self.assertEqual(conts, client_fixtures.STORAGE_CONTAINERS)
|
|
||||||
|
|
||||||
|
|
||||||
class TestSwiftObjects(TestSwiftBase):
|
|
||||||
|
|
||||||
def stub_list(self):
|
|
||||||
self.stub_url('GET', json=client_fixtures.STORAGE_CONTAINERS_LIST)
|
|
||||||
self.stub_url('GET', parts=[client_fixtures.STORAGE_CONTAINERS[0]],
|
|
||||||
json=client_fixtures.STORAGE_OBJECTS_LIST_0),
|
|
||||||
self.stub_url('GET', parts=[client_fixtures.STORAGE_CONTAINERS[1]],
|
|
||||||
json=client_fixtures.STORAGE_OBJECTS_LIST_1)
|
|
||||||
|
|
||||||
def stub_delete(self):
|
|
||||||
for obj in client_fixtures.STORAGE_OBJECTS:
|
|
||||||
self.stub_url('DELETE', parts=[obj['container'], obj['name']])
|
|
||||||
|
|
||||||
def setUp(self):
|
|
||||||
super(TestSwiftObjects, self).setUp()
|
|
||||||
self.resources = client.SwiftObjects(self.session)
|
|
||||||
|
|
||||||
@httpretty.activate
|
|
||||||
def test_list(self):
|
|
||||||
self.stub_list()
|
|
||||||
objs = list(self.resources.list())
|
|
||||||
self.assertEqual(client_fixtures.STORAGE_OBJECTS, objs)
|
|
||||||
|
|
||||||
def test_delete(self):
|
|
||||||
self._test_delete()
|
|
||||||
|
|
||||||
|
|
||||||
class TestSwiftContainers(TestSwiftBase):
|
|
||||||
|
|
||||||
def stub_list(self):
|
|
||||||
self.stub_url('GET', json=client_fixtures.STORAGE_CONTAINERS_LIST)
|
|
||||||
|
|
||||||
def stub_delete(self):
|
|
||||||
self.stub_url('DELETE', parts=[client_fixtures.STORAGE_CONTAINERS[0]])
|
|
||||||
|
|
||||||
def setUp(self):
|
|
||||||
super(TestSwiftContainers, self).setUp()
|
|
||||||
self.resources = client.SwiftContainers(self.session)
|
|
||||||
|
|
||||||
@httpretty.activate
|
|
||||||
def test_list(self):
|
|
||||||
self.stub_list()
|
|
||||||
conts = list(self.resources.list())
|
|
||||||
self.assertEqual(conts, client_fixtures.STORAGE_CONTAINERS)
|
|
||||||
|
|
||||||
def test_delete(self):
|
|
||||||
self._test_delete()
|
|
||||||
|
|
||||||
|
|
||||||
class TestCinderBase(TestResourcesBase):
|
|
||||||
TEST_URL = client_fixtures.VOLUME_PUBLIC_ENDPOINT
|
|
||||||
|
|
||||||
|
|
||||||
class TestCinderSnapshots(TestCinderBase):
|
|
||||||
IDS = client_fixtures.SNAPSHOTS_IDS
|
|
||||||
|
|
||||||
def stub_list(self):
|
|
||||||
self.stub_url('GET', parts=['snapshots', 'detail'],
|
|
||||||
json=client_fixtures.SNAPSHOTS_LIST)
|
|
||||||
|
|
||||||
def stub_delete(self):
|
|
||||||
self.stub_url(
|
|
||||||
'DELETE', parts=['snapshots', client_fixtures.SNAPSHOTS_IDS[0]])
|
|
||||||
|
|
||||||
def setUp(self):
|
|
||||||
super(TestCinderSnapshots, self).setUp()
|
|
||||||
self.resources = client.CinderSnapshots(self.session)
|
|
||||||
|
|
||||||
def test_list(self):
|
|
||||||
self._test_list()
|
|
||||||
|
|
||||||
def test_delete(self):
|
|
||||||
self._test_delete()
|
|
||||||
|
|
||||||
|
|
||||||
class TestCinderVolumes(TestCinderBase):
|
|
||||||
IDS = client_fixtures.VOLUMES_IDS
|
|
||||||
|
|
||||||
def stub_list(self):
|
|
||||||
self.stub_url('GET', parts=['volumes', 'detail'],
|
|
||||||
json=client_fixtures.VOLUMES_LIST)
|
|
||||||
|
|
||||||
def stub_delete(self):
|
|
||||||
self.stub_url(
|
|
||||||
'DELETE', parts=['volumes', client_fixtures.VOLUMES_IDS[0]])
|
|
||||||
|
|
||||||
def setUp(self):
|
|
||||||
super(TestCinderVolumes, self).setUp()
|
|
||||||
self.resources = client.CinderVolumes(self.session)
|
|
||||||
|
|
||||||
def test_list(self):
|
|
||||||
self._test_list()
|
|
||||||
|
|
||||||
def test_delete(self):
|
|
||||||
self._test_delete()
|
|
||||||
|
|
||||||
|
|
||||||
class TestCinderBackups(TestCinderBase):
|
|
||||||
IDS = client_fixtures.VOLUME_BACKUP_IDS
|
|
||||||
|
|
||||||
def stub_list(self):
|
|
||||||
self.stub_url('GET', parts=['backups', 'detail'],
|
|
||||||
json=client_fixtures.VOLUME_BACKUPS_LIST)
|
|
||||||
|
|
||||||
def stub_delete(self):
|
|
||||||
self.stub_url(
|
|
||||||
'DELETE', parts=['backups', self.IDS[0]])
|
|
||||||
|
|
||||||
def setUp(self):
|
|
||||||
super(TestCinderBackups, self).setUp()
|
|
||||||
# Make sure tests work whatever version of cinderclient
|
|
||||||
self.versionstring_bak = cinderclient.version_info.version_string
|
|
||||||
cinderclient.version_info.version_string = lambda: '1.4.0'
|
|
||||||
self.session.is_admin = True
|
|
||||||
self.resources = client.CinderBackups(self.session)
|
|
||||||
|
|
||||||
def tearDown(self):
|
|
||||||
super(TestCinderBackups, self).tearDown()
|
|
||||||
cinderclient.version_info.version_string = self.versionstring_bak
|
|
||||||
|
|
||||||
def test_list(self):
|
|
||||||
self._test_list()
|
|
||||||
|
|
||||||
def test_delete(self):
|
|
||||||
self._test_delete()
|
|
||||||
|
|
||||||
def test_empty_list(self):
|
|
||||||
self.stub_auth()
|
|
||||||
versionstring_bak = cinderclient.version_info.version_string
|
|
||||||
cinderclient.version_info.version_string = lambda: '1.1.1'
|
|
||||||
self.assertEqual(self.resources.list(), [])
|
|
||||||
cinderclient.version_info.version_string = versionstring_bak
|
|
||||||
|
|
||||||
|
|
||||||
class TestNeutronBase(TestResourcesBase):
|
|
||||||
TEST_URL = client_fixtures.NETWORK_PUBLIC_ENDPOINT
|
|
||||||
|
|
||||||
# Used both in TestNeutronRouters and TestNeutronInterfaces
|
|
||||||
def stub_list_routers(self):
|
|
||||||
self.stub_url(
|
|
||||||
'GET',
|
|
||||||
parts=[
|
|
||||||
'v2.0',
|
|
||||||
'routers.json?tenant_id=%s' % client_fixtures.PROJECT_ID
|
|
||||||
],
|
|
||||||
json=client_fixtures.ROUTERS_LIST
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
class TestNeutronRouters(TestNeutronBase):
|
|
||||||
IDS = client_fixtures.ROUTERS_IDS
|
|
||||||
|
|
||||||
def stub_list(self):
|
|
||||||
self.stub_list_routers()
|
|
||||||
|
|
||||||
def stub_delete(self):
|
|
||||||
routid = client_fixtures.ROUTERS_IDS[0]
|
|
||||||
self.stub_url('PUT', parts=['v2.0', 'routers', "%s.json" % routid],
|
|
||||||
json=client_fixtures.ROUTER_CLEAR_GATEWAY)
|
|
||||||
self.stub_url('DELETE', parts=['v2.0', 'routers', "%s.json" % routid],
|
|
||||||
json={})
|
|
||||||
|
|
||||||
def setUp(self):
|
|
||||||
super(TestNeutronRouters, self).setUp()
|
|
||||||
self.resources = client.NeutronRouters(self.session)
|
|
||||||
|
|
||||||
def test_list(self):
|
|
||||||
self._test_list()
|
|
||||||
|
|
||||||
def test_delete(self):
|
|
||||||
self._test_delete()
|
|
||||||
|
|
||||||
|
|
||||||
class TestNeutronInterfaces(TestNeutronBase):
|
|
||||||
IDS = client_fixtures.PORTS_IDS
|
|
||||||
|
|
||||||
def stub_list(self):
|
|
||||||
self.stub_list_routers()
|
|
||||||
self.stub_url('GET', parts=['v2.0', "ports.json?device_id={}".format(client_fixtures.ROUTERS_IDS[0])],
|
|
||||||
json=client_fixtures.ROUTER0_PORTS)
|
|
||||||
self.stub_url('GET', parts=['v2.0', "ports.json?device_id={}".format(client_fixtures.ROUTERS_IDS[1])],
|
|
||||||
json=client_fixtures.ROUTER1_PORTS)
|
|
||||||
|
|
||||||
def stub_delete(self):
|
|
||||||
for rout_id in client_fixtures.ROUTERS_IDS:
|
|
||||||
self.stub_url('PUT', parts=['v2.0', 'routers', rout_id,
|
|
||||||
'remove_router_interface.json'],
|
|
||||||
json=client_fixtures.REMOVE_ROUTER_INTERFACE)
|
|
||||||
|
|
||||||
def setUp(self):
|
|
||||||
super(TestNeutronInterfaces, self).setUp()
|
|
||||||
self.resources = client.NeutronInterfaces(self.session)
|
|
||||||
|
|
||||||
def test_list(self):
|
|
||||||
self._test_list()
|
|
||||||
|
|
||||||
def test_delete(self):
|
|
||||||
self._test_delete()
|
|
||||||
|
|
||||||
|
|
||||||
class TestNeutronPorts(TestNeutronBase):
|
|
||||||
IDS = [client_fixtures.UNBOUND_PORT_ID]
|
|
||||||
|
|
||||||
def stub_list(self):
|
|
||||||
self.stub_url(
|
|
||||||
'GET',
|
|
||||||
parts=[
|
|
||||||
'v2.0',
|
|
||||||
'ports.json?tenant_id=%s' % client_fixtures.PROJECT_ID
|
|
||||||
],
|
|
||||||
json=client_fixtures.NEUTRON_PORTS)
|
|
||||||
|
|
||||||
def stub_delete(self):
|
|
||||||
port_id = client_fixtures.UNBOUND_PORT_ID
|
|
||||||
self.stub_url('DELETE', parts=['v2.0', 'ports', "{}.json".format(port_id)],
|
|
||||||
json={})
|
|
||||||
|
|
||||||
def setUp(self):
|
|
||||||
super(TestNeutronPorts, self).setUp()
|
|
||||||
self.resources = client.NeutronPorts(self.session)
|
|
||||||
|
|
||||||
def test_list(self):
|
|
||||||
self._test_list()
|
|
||||||
|
|
||||||
def test_delete(self):
|
|
||||||
self._test_delete()
|
|
||||||
|
|
||||||
|
|
||||||
class TestNeutronNetworks(TestNeutronBase):
|
|
||||||
IDS = client_fixtures.NETWORKS_IDS
|
|
||||||
|
|
||||||
def stub_list(self):
|
|
||||||
self.stub_url(
|
|
||||||
'GET',
|
|
||||||
parts=[
|
|
||||||
'v2.0',
|
|
||||||
'networks.json?tenant_id=%s' % client_fixtures.PROJECT_ID
|
|
||||||
],
|
|
||||||
json=client_fixtures.NETWORKS_LIST
|
|
||||||
)
|
|
||||||
|
|
||||||
def stub_delete(self):
|
|
||||||
for net_id in client_fixtures.NETWORKS_IDS:
|
|
||||||
self.stub_url('DELETE', parts=['v2.0', 'networks',
|
|
||||||
"{}.json".format(net_id)], json={})
|
|
||||||
|
|
||||||
def setUp(self):
|
|
||||||
super(TestNeutronNetworks, self).setUp()
|
|
||||||
self.resources = client.NeutronNetworks(self.session)
|
|
||||||
|
|
||||||
def test_list(self):
|
|
||||||
self._test_list()
|
|
||||||
|
|
||||||
def test_delete(self):
|
|
||||||
self._test_delete()
|
|
||||||
|
|
||||||
|
|
||||||
class TestNeutronSecgroups(TestNeutronBase):
    IDS = client_fixtures.SECGROUPS_IDS

    def stub_list(self):
        self.stub_url(
            'GET',
            parts=[
                'v2.0',
                'security-groups.json?tenant_id=%s' % client_fixtures.PROJECT_ID
            ],
            json=client_fixtures.SECGROUPS_LIST)

    def stub_delete(self):
        for secgroup_id in client_fixtures.SECGROUPS_IDS:
            self.stub_url('DELETE', parts=['v2.0', 'security-groups',
                                           "{}.json".format(secgroup_id)], json={})

    def setUp(self):
        super(TestNeutronSecgroups, self).setUp()
        self.resources = client.NeutronSecgroups(self.session)

    def test_list(self):
        self._test_list()

    def test_delete(self):
        self._test_delete()


class TestNeutronFloatingIps(TestNeutronBase):
    IDS = client_fixtures.FLOATING_IPS_IDS

    def stub_list(self):
        self.stub_url(
            'GET',
            parts=[
                'v2.0',
                'floatingips.json?tenant_id=%s' % client_fixtures.PROJECT_ID
            ],
            json=client_fixtures.FLOATING_IPS_LIST)

    def stub_delete(self):
        ip_id = client_fixtures.FLOATING_IPS_IDS[0]
        self.stub_url('DELETE', parts=['v2.0', 'floatingips', "{}.json".format(ip_id)], json={})

    def setUp(self):
        super(TestNeutronFloatingIps, self).setUp()
        self.resources = client.NeutronFloatingIps(self.session)

    def test_list(self):
        self._test_list()

    def test_delete(self):
        self._test_delete()


class TestNeutronFireWallRule(TestNeutronBase):
    IDS = client_fixtures.FIREWALL_RULE_IDS

    def stub_list(self):
        self.stub_url(
            'GET',
            parts=[
                'v2.0',
                'fw/firewall_rules.json?tenant_id=%s' % client_fixtures.PROJECT_ID
            ],
            json=client_fixtures.FIREWALL_RULE_LIST)

    def stub_delete(self):
        firewall_rule_id = client_fixtures.FIREWALL_RULE_IDS[0]
        self.stub_url('DELETE', parts=['v2.0', 'fw/firewall_rules', "{}.json".format(firewall_rule_id)], json={})

    def setUp(self):
        super(TestNeutronFireWallRule, self).setUp()
        self.resources = client.NeutronFireWallRule(self.session)

    def test_list(self):
        self._test_list()

    def test_delete(self):
        self._test_delete()


class TestNeutronFireWallPolicy(TestNeutronBase):
    IDS = client_fixtures.FIREWALL_POLICY_IDS

    def stub_list(self):
        self.stub_url(
            'GET',
            parts=[
                'v2.0',
                'fw/firewall_policies.json?tenant_id=%s' % client_fixtures.PROJECT_ID
            ],
            json=client_fixtures.FIREWALL_POLICY_LIST)

    def stub_delete(self):
        firewall_policy_id = client_fixtures.FIREWALL_POLICY_IDS[0]
        self.stub_url('DELETE', parts=['v2.0', 'fw/firewall_policies', "{}.json".format(firewall_policy_id)], json={})

    def setUp(self):
        super(TestNeutronFireWallPolicy, self).setUp()
        self.resources = client.NeutronFireWallPolicy(self.session)

    def test_list(self):
        self._test_list()

    def test_delete(self):
        self._test_delete()


class TestNeutronFireWall(TestNeutronBase):
    IDS = client_fixtures.FIREWALL_IDS

    def stub_list(self):
        self.stub_url(
            'GET',
            parts=[
                'v2.0',
                'fw/firewalls.json?tenant_id=%s' % client_fixtures.PROJECT_ID
            ],
            json=client_fixtures.FIREWALL_LIST)

    def stub_delete(self):
        firewall_id = client_fixtures.FIREWALL_IDS[0]
        self.stub_url('DELETE', parts=['v2.0', 'fw/firewalls', "{}.json".format(firewall_id)], json={})

    def setUp(self):
        super(TestNeutronFireWall, self).setUp()
        self.resources = client.NeutronFireWall(self.session)

    def test_list(self):
        self._test_list()

    def test_delete(self):
        self._test_delete()


class TestNeutronMeteringLabel(TestNeutronBase):
    IDS = client_fixtures.METERING_LABEL_IDS

    def stub_list(self):
        self.stub_url(
            'GET',
            parts=[
                'v2.0',
                'metering/metering-labels.json?tenant_id=%s' % client_fixtures.PROJECT_ID
            ],
            json=client_fixtures.METERING_LABEL_LIST)

    def stub_delete(self):
        firewall_id = client_fixtures.METERING_LABEL_IDS[0]
        self.stub_url('DELETE', parts=['v2.0', 'metering/metering-labels', "{}.json".format(firewall_id)], json={})

    def setUp(self):
        super(TestNeutronMeteringLabel, self).setUp()
        self.resources = client.NeutronMeteringLabel(self.session)

    def test_list(self):
        self._test_list()

    def test_delete(self):
        self._test_delete()


class TestNeutronLbMembers(TestNeutronBase):
    IDS = client_fixtures.LBAAS_MEMBER_IDS

    def stub_list(self):
        self.stub_url(
            'GET',
            parts=[
                'v2.0',
                'lb/members.json?tenant_id=%s' % client_fixtures.PROJECT_ID
            ],
            json=client_fixtures.LBAAS_MEMBER_LIST)

    def stub_delete(self):
        lb_member_id = client_fixtures.LBAAS_MEMBER_IDS[0]
        self.stub_url('DELETE', parts=['v2.0', 'lb/members', "{}.json".format(lb_member_id)], json={})

    def setUp(self):
        super(TestNeutronLbMembers, self).setUp()
        self.resources = client.NeutronLbMembers(self.session)

    def test_list(self):
        self._test_list()

    def test_delete(self):
        self._test_delete()


class TestNeutronLbVip(TestNeutronBase):
    IDS = client_fixtures.LBAAS_VIP_IDS

    def stub_list(self):
        self.stub_url(
            'GET',
            parts=[
                'v2.0',
                'lb/vips.json?tenant_id=%s' % client_fixtures.PROJECT_ID
            ],
            json=client_fixtures.LBAAS_VIP_LIST)

    def stub_delete(self):
        lb_vip_id = client_fixtures.LBAAS_VIP_IDS[0]
        self.stub_url('DELETE', parts=['v2.0', 'lb/vips', "{}.json".format(lb_vip_id)], json={})

    def setUp(self):
        super(TestNeutronLbVip, self).setUp()
        self.resources = client.NeutronLbVip(self.session)

    def test_list(self):
        self._test_list()

    def test_delete(self):
        self._test_delete()


class TestNeutronLbHealthMonitor(TestNeutronBase):
    IDS = client_fixtures.LBAAS_HEALTHMONITOR_IDS

    def stub_list(self):
        self.stub_url(
            'GET',
            parts=[
                'v2.0',
                'lb/health_monitors.json?tenant_id=%s' % client_fixtures.PROJECT_ID
            ],
            json=client_fixtures.LBAAS_HEALTHMONITOR_LIST)

    def stub_delete(self):
        lb_healthmonitor_id = client_fixtures.LBAAS_HEALTHMONITOR_IDS[0]
        self.stub_url('DELETE', parts=['v2.0', 'lb/health_monitors', "{}.json".format(lb_healthmonitor_id)], json={})

    def setUp(self):
        super(TestNeutronLbHealthMonitor, self).setUp()
        self.resources = client.NeutronLbHealthMonitor(self.session)

    def test_list(self):
        self._test_list()

    def test_delete(self):
        self._test_delete()


class TestNeutronLbPool(TestNeutronBase):
    IDS = client_fixtures.LBAAS_POOL_IDS

    def stub_list(self):
        self.stub_url(
            'GET',
            parts=[
                'v2.0',
                'lb/pools.json?tenant_id=%s' % client_fixtures.PROJECT_ID
            ],
            json=client_fixtures.LBAAS_POOL_LIST)

    def stub_delete(self):
        lb_pool_id = client_fixtures.LBAAS_POOL_IDS[0]
        self.stub_url('DELETE', parts=['v2.0', 'lb/pools', "{}.json".format(lb_pool_id)], json={})

    def setUp(self):
        super(TestNeutronLbPool, self).setUp()
        self.resources = client.NeutronLbPool(self.session)

    def test_list(self):
        self._test_list()

    def test_delete(self):
        self._test_delete()


class TestNovaServers(TestResourcesBase):
    TEST_URL = client_fixtures.COMPUTE_PUBLIC_ENDPOINT
    IDS = client_fixtures.SERVERS_IDS

    def stub_list(self):
        self.stub_url('GET', parts=['servers', 'detail'],
                      json=client_fixtures.SERVERS_LIST)

    def stub_delete(self):
        self.stub_url(
            'DELETE', parts=['servers', client_fixtures.SERVERS_IDS[0]])

    def setUp(self):
        super(TestNovaServers, self).setUp()
        self.resources = client.NovaServers(self.session)

    def test_list(self):
        self._test_list()

    def test_delete(self):
        self._test_delete()


class TestGlanceImages(TestResourcesBase):
    TEST_URL = client_fixtures.IMAGE_PUBLIC_ENDPOINT
    IDS = client_fixtures.IMAGES_IDS

    def stub_list(self):
        self.stub_url('GET', parts=['v1', 'images', 'detail'],
                      json=client_fixtures.IMAGES_LIST)

    def stub_delete(self):
        self.stub_url(
            'DELETE', parts=['v1', 'images', client_fixtures.IMAGES_IDS[0]])

    def setUp(self):
        super(TestGlanceImages, self).setUp()
        self.resources = client.GlanceImages(self.session)

    def test_list(self):
        self._test_list()

    def test_delete(self):
        self._test_delete()


class TestCeilometerAlarms(TestResourcesBase):
    TEST_URL = client_fixtures.METERING_PUBLIC_ENDPOINT

    def extra_set_up(self):
        self.stub_url(
            'GET', base_url=AUTH_URL, json=client_fixtures.AUTH_URL_RESPONSE)
        self.resources = client.CeilometerAlarms(self.session)

    def stub_list(self):
        self.stub_url('GET', parts=['v2', 'alarms'],
                      json=client_fixtures.ALARMS_LIST)

    def stub_delete(self):
        self.stub_url(
            'DELETE', parts=['v2', 'alarms', client_fixtures.ALARMS_IDS[0]])

    def setUp(self):
        super(TestCeilometerAlarms, self).setUp()

    @httpretty.activate
    def test_list(self):
        self.stub_auth()
        self.stub_list()
        elts = list(self.resources.list())
        ids = [elt.alarm_id for elt in elts]
        self.assertEqual(client_fixtures.ALARMS_IDS, ids)

    def test_delete(self):
        self._test_delete()


class TestHeatStacks(TestResourcesBase):
    TEST_URL = client_fixtures.ORCHESTRATION_PUBLIC_ENDPOINT
    IDS = client_fixtures.STACKS_IDS

    def stub_list(self):
        self.stub_url('GET', parts=['stacks?'],
                      json=client_fixtures.STACKS_LIST)

    def stub_delete(self):
        self.stub_url(
            'DELETE', parts=['stacks', client_fixtures.STACKS_IDS[0]])

    def setUp(self):
        super(TestHeatStacks, self).setUp()
        self.resources = client.HeatStacks(self.session)

    def test_list(self):
        self._test_list()

    def test_delete(self):
        self._test_delete()

    @httpretty.activate
    def test_abandon(self):
        self.stub_auth()
        self.stub_list()
        get_result = {'stack': client_fixtures.STACKS_LIST['stacks'][1]}
        location = '%s/stacks/stack2/%s' % (self.TEST_URL,
                                            client_fixtures.STACKS_IDS[1])
        self.stub_url(
            'GET', parts=['stacks', client_fixtures.STACKS_IDS[1]],
            json=get_result, location=location)
        self.stub_url(
            'DELETE',
            parts=['stacks', 'stack2', client_fixtures.STACKS_IDS[1],
                   'abandon'])
        elts = list(self.resources.list())
        self.resources.delete(elts[1])
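The per-resource test classes above all follow the same pattern: declare the fixture IDS, stub the list/delete HTTP calls, and delegate the actual assertions to helpers inherited from TestNeutronBase/TestResourcesBase. Those base classes are not part of this hunk; the sketch below is only a rough, assumed reconstruction of what the shared _test_list()/_test_delete() helpers do (names such as stub_auth and the exact assertions are assumptions, not code from this commit).

    # Hypothetical sketch of the shared helpers the classes above rely on.
    import httpretty


    class TestResourcesBaseSketch(object):
        IDS = []  # each subclass lists the fixture IDs it expects back

        @httpretty.activate
        def _test_list(self):
            self.stub_auth()   # stub the Keystone token request
            self.stub_list()   # stub the service's list endpoint
            ids = [elt['id'] for elt in self.resources.list()]
            self.assertEqual(self.IDS, ids)

        @httpretty.activate
        def _test_delete(self):
            self.stub_auth()
            self.stub_list()
            self.stub_delete()  # stub the DELETE endpoint
            elts = list(self.resources.list())
            self.resources.delete(elts[0])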
261  ospurge/tests/test_main.py  Normal file
@@ -0,0 +1,261 @@
|
|||||||
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||||
|
# not use this file except in compliance with the License. You may obtain
|
||||||
|
# a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||||
|
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||||
|
# License for the specific language governing permissions and limitations
|
||||||
|
# under the License.
|
||||||
|
import argparse
|
||||||
|
import logging
|
||||||
|
import types
|
||||||
|
import unittest
|
||||||
|
from unittest import mock
|
||||||
|
|
||||||
|
import shade.exc
|
||||||
|
|
||||||
|
from ospurge import exceptions
|
||||||
|
from ospurge import main
|
||||||
|
from ospurge.resources.base import ServiceResource
|
||||||
|
from ospurge import utils
|
||||||
|
|
||||||
|
|
||||||
|
class TestFunctions(unittest.TestCase):
|
||||||
|
@mock.patch('logging.basicConfig', autospec=True)
|
||||||
|
def test_configure_logging_verbose(self, m_basicConfig):
|
||||||
|
main.configure_logging(verbose=True)
|
||||||
|
m_basicConfig.assert_called_with(format=mock.ANY, level=logging.INFO)
|
||||||
|
|
||||||
|
@mock.patch('logging.basicConfig', autospec=True)
|
||||||
|
def test_configure_logging(self, m_basicConfig):
|
||||||
|
main.configure_logging(verbose=False)
|
||||||
|
m_basicConfig.assert_called_with(format=mock.ANY, level=logging.WARN)
|
||||||
|
|
||||||
|
def test_create_argument_parser_with_purge_project(self):
|
||||||
|
parser = main.create_argument_parser()
|
||||||
|
self.assertIsInstance(parser, argparse.ArgumentParser)
|
||||||
|
|
||||||
|
options = parser.parse_args([
|
||||||
|
'--verbose', '--dry-run', '--purge-project', 'foo',
|
||||||
|
'--delete-shared-resources'
|
||||||
|
])
|
||||||
|
self.assertEqual(True, options.verbose)
|
||||||
|
self.assertEqual(True, options.dry_run)
|
||||||
|
self.assertEqual(True, options.delete_shared_resources)
|
||||||
|
self.assertEqual('foo', options.purge_project)
|
||||||
|
|
||||||
|
def test_create_argument_parser_with_purge_own_project(self):
|
||||||
|
parser = main.create_argument_parser()
|
||||||
|
options = parser.parse_args(['--purge-own-project'])
|
||||||
|
|
||||||
|
self.assertEqual(False, options.verbose)
|
||||||
|
self.assertEqual(False, options.dry_run)
|
||||||
|
self.assertEqual(False, options.delete_shared_resources)
|
||||||
|
self.assertEqual(True, options.purge_own_project)
|
||||||
|
|
||||||
|
def test_runner(self):
|
||||||
|
resources = [mock.Mock(), mock.Mock(), mock.Mock()]
|
||||||
|
resource_manager = mock.Mock(list=mock.Mock(return_value=resources))
|
||||||
|
options = mock.Mock(dry_run=False)
|
||||||
|
exit = mock.Mock(is_set=mock.Mock(side_effect=[False, False, True]))
|
||||||
|
|
||||||
|
main.runner(resource_manager, options, exit)
|
||||||
|
|
||||||
|
resource_manager.list.assert_called_once_with()
|
||||||
|
resource_manager.wait_for_check_prerequisite.assert_called_once_with(
|
||||||
|
exit)
|
||||||
|
self.assertEqual(
|
||||||
|
[mock.call(resources[0]), mock.call(resources[1])],
|
||||||
|
resource_manager.should_delete.call_args_list
|
||||||
|
)
|
||||||
|
self.assertEqual(2, resource_manager.delete.call_count)
|
||||||
|
self.assertEqual(
|
||||||
|
[mock.call(resources[0]), mock.call(resources[1])],
|
||||||
|
resource_manager.delete.call_args_list
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_runner_dry_run(self):
|
||||||
|
resources = [mock.Mock(), mock.Mock()]
|
||||||
|
resource_manager = mock.Mock(list=mock.Mock(return_value=resources))
|
||||||
|
options = mock.Mock(dry_run=True)
|
||||||
|
exit = mock.Mock(is_set=mock.Mock(return_value=False))
|
||||||
|
|
||||||
|
main.runner(resource_manager, options, exit)
|
||||||
|
|
||||||
|
resource_manager.wait_for_check_prerequisite.assert_not_called()
|
||||||
|
resource_manager.delete.assert_not_called()
|
||||||
|
|
||||||
|
def test_runner_with_unrecoverable_exception(self):
|
||||||
|
resource_manager = mock.Mock(list=mock.Mock(side_effect=Exception))
|
||||||
|
exit = mock.Mock()
|
||||||
|
|
||||||
|
main.runner(resource_manager, mock.Mock(dry_run=True), exit)
|
||||||
|
|
||||||
|
exit.set.assert_called_once_with()
|
||||||
|
|
||||||
|
def test_runner_with_recoverable_exception(self):
|
||||||
|
class MyEndpointNotFound(Exception):
|
||||||
|
pass
|
||||||
|
exc = shade.exc.OpenStackCloudException("")
|
||||||
|
exc.inner_exception = (MyEndpointNotFound, )
|
||||||
|
resource_manager = mock.Mock(list=mock.Mock(side_effect=exc))
|
||||||
|
exit = mock.Mock()
|
||||||
|
|
||||||
|
main.runner(resource_manager, mock.Mock(dry_run=True), exit)
|
||||||
|
|
||||||
|
self.assertFalse(exit.set.called)
|
||||||
|
|
||||||
|
@mock.patch.object(main, 'os_client_config', autospec=True)
|
||||||
|
@mock.patch.object(main, 'shade')
|
||||||
|
@mock.patch('argparse.ArgumentParser.parse_args')
|
||||||
|
@mock.patch('threading.Event', autospec=True)
|
||||||
|
@mock.patch('concurrent.futures.ThreadPoolExecutor', autospec=True)
|
||||||
|
@mock.patch('sys.exit', autospec=True)
|
||||||
|
def test_main(self, m_sys_exit, m_tpe, m_event, m_parse_args, m_shade,
|
||||||
|
m_oscc):
|
||||||
|
m_tpe.return_value.__enter__.return_value.map.side_effect = \
|
||||||
|
KeyboardInterrupt
|
||||||
|
m_parse_args.return_value.purge_own_project = False
|
||||||
|
m_shade.operator_cloud().get_project().enabled = False
|
||||||
|
|
||||||
|
main.main()
|
||||||
|
|
||||||
|
m_oscc.OpenStackConfig.assert_called_once_with()
|
||||||
|
|
||||||
|
m_parse_args.assert_called_once_with()
|
||||||
|
|
||||||
|
self.assertIsInstance(m_tpe.call_args[0][0], int)
|
||||||
|
m_tpe.return_value.__enter__.assert_called_once_with()
|
||||||
|
self.assertEqual(1, m_tpe.return_value.__exit__.call_count)
|
||||||
|
|
||||||
|
executor = m_tpe.return_value.__enter__.return_value
|
||||||
|
self.assertEqual(1, executor.map.call_count)
|
||||||
|
map_args = executor.map.call_args[0]
|
||||||
|
self.assertEqual(True, callable(map_args[0]))
|
||||||
|
for obj in map_args[1]:
|
||||||
|
self.assertIsInstance(obj, ServiceResource)
|
||||||
|
|
||||||
|
m_event.return_value.set.assert_called_once_with()
|
||||||
|
m_event.return_value.is_set.assert_called_once_with()
|
||||||
|
self.assertIsInstance(m_sys_exit.call_args[0][0], int)
|
||||||
|
|
||||||
|
|
||||||
|
@mock.patch.object(main, 'shade')
|
||||||
|
class TestCredentialsManager(unittest.TestCase):
|
||||||
|
def test_init_with_purge_own_project(self, m_shade):
|
||||||
|
_options = types.SimpleNamespace(
|
||||||
|
purge_own_project=True, purge_project=None)
|
||||||
|
creds_mgr = main.CredentialsManager(_options)
|
||||||
|
|
||||||
|
self.assertEqual(_options, creds_mgr.options)
|
||||||
|
self.assertEqual(False, creds_mgr.revoke_role_after_purge)
|
||||||
|
self.assertEqual(False, creds_mgr.disable_project_after_purge)
|
||||||
|
self.assertIsNone(creds_mgr.operator_cloud)
|
||||||
|
|
||||||
|
m_shade.openstack_cloud.assert_called_once_with(argparse=_options)
|
||||||
|
self.assertEqual(m_shade.openstack_cloud.return_value,
|
||||||
|
creds_mgr.cloud)
|
||||||
|
|
||||||
|
self.assertEqual(
|
||||||
|
creds_mgr.cloud.keystone_session.get_user_id(),
|
||||||
|
creds_mgr.user_id
|
||||||
|
)
|
||||||
|
self.assertEqual(
|
||||||
|
creds_mgr.cloud.keystone_session.get_project_id(),
|
||||||
|
creds_mgr.project_id
|
||||||
|
)
|
||||||
|
|
||||||
|
creds_mgr.cloud.cloud_config.get_auth_args.assert_called_once_with()
|
||||||
|
|
||||||
|
@mock.patch.object(utils, 'replace_project_info')
|
||||||
|
def test_init_with_purge_project(self, m_replace, m_shade):
|
||||||
|
_options = types.SimpleNamespace(
|
||||||
|
purge_own_project=False, purge_project=mock.sentinel.purge_project)
|
||||||
|
creds_mgr = main.CredentialsManager(_options)
|
||||||
|
|
||||||
|
m_shade.operator_cloud.assert_called_once_with(argparse=_options)
|
||||||
|
self.assertEqual(m_shade.operator_cloud.return_value,
|
||||||
|
creds_mgr.operator_cloud)
|
||||||
|
|
||||||
|
creds_mgr.operator_cloud.get_project.assert_called_once_with(
|
||||||
|
_options.purge_project)
|
||||||
|
|
||||||
|
self.assertEqual(
|
||||||
|
creds_mgr.operator_cloud.keystone_session.get_user_id.return_value,
|
||||||
|
creds_mgr.user_id
|
||||||
|
)
|
||||||
|
self.assertEqual(
|
||||||
|
creds_mgr.operator_cloud.get_project()['id'],
|
||||||
|
creds_mgr.project_id
|
||||||
|
)
|
||||||
|
self.assertFalse(creds_mgr.disable_project_after_purge)
|
||||||
|
self.assertEqual(
|
||||||
|
m_shade.openstack_cloud.return_value,
|
||||||
|
creds_mgr.cloud
|
||||||
|
)
|
||||||
|
m_replace.assert_called_once_with(
|
||||||
|
creds_mgr.operator_cloud.cloud_config.config,
|
||||||
|
creds_mgr.project_id
|
||||||
|
)
|
||||||
|
creds_mgr.cloud.cloud_config.get_auth_args.assert_called_once_with()
|
||||||
|
|
||||||
|
def test_init_with_project_not_found(self, m_shade):
|
||||||
|
m_shade.operator_cloud.return_value.get_project.return_value = None
|
||||||
|
self.assertRaises(
|
||||||
|
exceptions.OSProjectNotFound,
|
||||||
|
main.CredentialsManager, mock.Mock(purge_own_project=False)
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_ensure_role_on_project(self, m_shade):
|
||||||
|
options = mock.Mock(purge_own_project=False)
|
||||||
|
creds_manager = main.CredentialsManager(options)
|
||||||
|
creds_manager.ensure_role_on_project()
|
||||||
|
|
||||||
|
m_shade.operator_cloud.return_value.grant_role.assert_called_once_with(
|
||||||
|
options.admin_role_name, project=options.purge_project,
|
||||||
|
user=mock.ANY)
|
||||||
|
self.assertEqual(True, creds_manager.revoke_role_after_purge)
|
||||||
|
|
||||||
|
# If purge_own_project is not False, we purge our own project
|
||||||
|
# so no need to revoke role after purge
|
||||||
|
creds_manager = main.CredentialsManager(mock.Mock())
|
||||||
|
creds_manager.ensure_role_on_project()
|
||||||
|
self.assertEqual(False, creds_manager.revoke_role_after_purge)
|
||||||
|
|
||||||
|
def test_revoke_role_on_project(self, m_shade):
|
||||||
|
options = mock.Mock(purge_own_project=False)
|
||||||
|
creds_manager = main.CredentialsManager(options)
|
||||||
|
creds_manager.revoke_role_on_project()
|
||||||
|
|
||||||
|
m_shade.operator_cloud().revoke_role.assert_called_once_with(
|
||||||
|
options.admin_role_name, project=options.purge_project,
|
||||||
|
user=mock.ANY)
|
||||||
|
|
||||||
|
def test_ensure_enabled_project(self, m_shade):
|
||||||
|
m_shade.operator_cloud().get_project().enabled = False
|
||||||
|
creds_manager = main.CredentialsManager(
|
||||||
|
mock.Mock(purge_own_project=False))
|
||||||
|
creds_manager.ensure_enabled_project()
|
||||||
|
|
||||||
|
self.assertEqual(True, creds_manager.disable_project_after_purge)
|
||||||
|
m_shade.operator_cloud().update_project.assert_called_once_with(
|
||||||
|
mock.ANY, enabled=True)
|
||||||
|
|
||||||
|
# If project is enabled before purge, no need to disable it after
|
||||||
|
# purge
|
||||||
|
creds_manager = main.CredentialsManager(mock.Mock())
|
||||||
|
creds_manager.ensure_enabled_project()
|
||||||
|
self.assertEqual(False, creds_manager.disable_project_after_purge)
|
||||||
|
self.assertEqual(1, m_shade.operator_cloud().update_project.call_count)
|
||||||
|
|
||||||
|
def test_disable_project(self, m_shade):
|
||||||
|
options = mock.Mock(purge_own_project=False)
|
||||||
|
creds_manager = main.CredentialsManager(options)
|
||||||
|
creds_manager.disable_project()
|
||||||
|
|
||||||
|
m_shade.operator_cloud().update_project.assert_called_once_with(
|
||||||
|
mock.ANY, enabled=False
|
||||||
|
)
|
81  ospurge/tests/test_utils.py  Normal file
@@ -0,0 +1,81 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import typing
import unittest
from unittest import mock

import shade

from ospurge.resources.base import ServiceResource
from ospurge import utils


class TestUtils(unittest.TestCase):
    def test_replace_project_info_in_config(self):
        config = {
            'cloud': 'foo',
            'auth': {
                'project_name': 'bar'
            }
        }
        new_conf = utils.replace_project_info(
            config, mock.sentinel.project)

        self.assertEqual(new_conf, {
            'auth': {
                'project_id': mock.sentinel.project
            }
        })
        self.assertEqual(config, {
            'cloud': 'foo',
            'auth': {
                'project_name': 'bar'
            }
        })

    def test_get_all_resource_classes(self):
        classes = utils.get_all_resource_classes()
        self.assertIsInstance(classes, typing.List)
        for klass in classes:
            self.assertTrue(issubclass(klass, ServiceResource))

    def test_call_and_ignore_notfound(self):
        def raiser():
            raise shade.exc.OpenStackCloudResourceNotFound("")

        self.assertIsNone(utils.call_and_ignore_notfound(raiser))

        m = mock.Mock()
        utils.call_and_ignore_notfound(m, 42)
        self.assertEqual([mock.call(42)], m.call_args_list)

    @mock.patch('logging.getLogger', autospec=True)
    def test_monkeypatch_oscc_logging_warning(self, mock_getLogger):
        oscc_target = 'os_client_config.cloud_config'
        m_oscc_logger, m_other_logger = mock.Mock(), mock.Mock()

        mock_getLogger.side_effect = \
            lambda m: m_oscc_logger if m == oscc_target else m_other_logger

        @utils.monkeypatch_oscc_logging_warning
        def f():
            logging.getLogger(oscc_target).warning("foo")
            logging.getLogger(oscc_target).warning("!catalog entry not found!")
            logging.getLogger("other").warning("!catalog entry not found!")

        f()

        self.assertEqual([mock.call.warning('foo'), ],
                         m_oscc_logger.mock_calls)
        self.assertEqual([mock.call.warning('!catalog entry not found!')],
                         m_other_logger.mock_calls)
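test_get_all_resource_classes above only checks that discovery returns ServiceResource subclasses, while the test_main.py tests earlier in this commit exercise main.runner against mocked managers. For orientation, here is a rough, assumed sketch (not code from this commit) of how those two pieces are typically wired together; the constructor argument, the worker body and the max_workers value are all assumptions.

    # Hypothetical wiring sketch, not part of the commit.
    import concurrent.futures
    import threading

    from ospurge import utils


    def purge_everything(creds_manager, options):
        exit_event = threading.Event()
        managers = [cls(creds_manager) for cls in utils.get_all_resource_classes()]

        def worker(manager):
            # Mirrors what the runner tests expect: list, filter, delete,
            # and stop early once the shared event is set.
            for resource in manager.list():
                if exit_event.is_set():
                    break
                if manager.should_delete(resource) and not options.dry_run:
                    manager.delete(resource)

        with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor:
            list(executor.map(worker, managers))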
92  ospurge/utils.py  Normal file
@@ -0,0 +1,92 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import functools
import importlib
import logging
import pkgutil
from typing import Any
from typing import Callable
from typing import cast
from typing import Dict
from typing import List
from typing import TypeVar

import shade

from ospurge.resources import base


def get_all_resource_classes() -> List:
    """
    Import all the modules in the `resources` package and return all the
    subclasses of the `ServiceResource` Abstract Base Class. This way we can
    easily extend OSPurge by just adding a new file in the `resources` dir.
    """
    iter_modules = pkgutil.iter_modules(
        ['ospurge/resources'], prefix='ospurge.resources.'
    )
    for (_, name, ispkg) in iter_modules:
        if not ispkg:
            importlib.import_module(name)

    return base.ServiceResource.__subclasses__()


F = TypeVar('F', bound=Callable[..., Any])


def monkeypatch_oscc_logging_warning(f: F) -> F:
    """
    Monkey-patch logging.warning() method to silence 'os_client_config' when
    it complains that a Keystone catalog entry is not found. This warning
    benignly happens when, for instance, we try to cleanup a Neutron resource
    but Neutron is not available on the target cloud environment.
    """
    oscc_target = 'os_client_config.cloud_config'
    orig_logging = logging.getLogger(oscc_target).warning

    def logging_warning(msg: str, *args: Any, **kwargs: Any) -> None:
        if 'catalog entry not found' not in msg:
            orig_logging(msg, *args, **kwargs)

    @functools.wraps(f)
    def wrapper(*args: list, **kwargs: dict) -> Any:
        try:
            setattr(logging.getLogger(oscc_target), 'warning', logging_warning)
            return f(*args, **kwargs)
        finally:
            setattr(logging.getLogger(oscc_target), 'warning', orig_logging)

    return cast(F, wrapper)


def call_and_ignore_notfound(f: Callable, *args: List) -> None:
    try:
        f(*args)
    except shade.exc.OpenStackCloudResourceNotFound:
        pass


def replace_project_info(config: Dict, new_project_id: str) -> Dict[str, Any]:
    """
    Replace all tenant/project info in a `os_client_config` config dict with
    a new project. This is used to bind/scope to another project.
    """
    new_conf = copy.deepcopy(config)
    new_conf.pop('cloud', None)
    new_conf['auth'].pop('project_name', None)
    new_conf['auth'].pop('project_id', None)

    new_conf['auth']['project_id'] = new_project_id

    return new_conf
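A short usage sketch of the helpers above; the cloud name, project id and network id below are made-up values and the shade calls are only illustrative, not taken from this commit.

    # Hypothetical usage, not part of the commit.
    import shade

    from ospurge import utils

    config = {'cloud': 'mycloud', 'auth': {'project_name': 'demo'}}
    scoped = utils.replace_project_info(config, 'abc123')
    # scoped == {'auth': {'project_id': 'abc123'}}; the original dict is untouched.

    cloud = shade.openstack_cloud(cloud='mycloud')
    # Deleting something that is already gone is not an error:
    utils.call_and_ignore_notfound(cloud.delete_network, 'net-uuid')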
@@ -1,10 +1,3 @@
-requests>=2.10.0 # Apache-2.0
-keystoneauth1>=2.14.0 # Apache-2.0
-python-ceilometerclient>=2.5.0 # Apache-2.0
-python-cinderclient>=1.6.0,!=1.7.0,!=1.7.1 # Apache-2.0
-python-glanceclient>=2.5.0 # Apache-2.0
-python-heatclient>=1.5.0 # Apache-2.0
-python-keystoneclient>=3.6.0 # Apache-2.0
-python-neutronclient>=5.1.0 # Apache-2.0
-python-novaclient>=2.29.0,!=2.33.0 # Apache-2.0
-python-swiftclient>=2.2.0 # Apache-2.0
+os-client-config>=1.22.0 # Apache-2.0
+pbr>=1.8 # Apache-2.0
+shade>=1.13.1
24  setup.cfg
@@ -1,11 +1,10 @@
 [metadata]
 name = ospurge
-author = Florent Flament
-author-email = florent.flament@cloudwatt.com
+author = The OSPurge contributors
 home-page = https://github.com/openstack/ospurge
 summary = OpenStack resources cleanup script
 description-file = README.rst
-license = MIT
+license = Apache-2
 classifier =
     Development Status :: 5 - Production/Stable
     Environment :: Console
@@ -13,16 +12,27 @@ classifier =
     Intended Audience :: Developers
     Intended Audience :: Information Technology
     Intended Audience :: System Administrators
-    License :: OSI Approved :: MIT License
-    Programming Language :: Python
-    Programming Language :: Python :: 2.7
+    License :: OSI Approved :: Apache Software License
+    Operating System :: POSIX :: Linux
+    Programming Language :: Python :: 3
+    Programming Language :: Python :: 3.5
 keywords =
     openstack
 
 [entry_points]
 console_scripts =
-    ospurge = ospurge.client:main
+    ospurge = ospurge.main:main
 
 [files]
 packages =
     ospurge
+
+[build_sphinx]
+all_files = 1
+build-dir = doc/build
+source-dir = doc/source
+
+[pbr]
+# Treat sphinx warnings as errors during the docs build; this helps us keep
+# the documentation clean.
+warnerrors = True
13  setup.py
@@ -1,6 +1,17 @@
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
 import setuptools
 
 setuptools.setup(
-    setup_requires=['pbr'],
+    setup_requires=['pbr>=1.8'],
     pbr=True,
 )
@@ -1,9 +1,8 @@
 bashate>=0.2 # Apache-2.0
+coverage>=4.0 # Apache-2.0
+doc8 # Apache-2.0
 hacking>=0.12.0,<0.13 # Apache-2.0
-httpretty
+mypy-lang
-testtools>=1.4.0 # MIT
+openstackdocstheme>=1.5.0 # Apache-2.0
-nose # LGPL
 sphinx>=1.2.1,!=1.3b1,<1.4 # BSD
 testrepository>=0.0.18 # Apache-2.0/BSD
-doc8 # Apache-2.0
-openstackdocstheme>=1.5.0 # Apache-2.0
2  tools/.gitignore  vendored
@@ -1,2 +0,0 @@
-dummy_stack.yaml
-zero_disk.raw
135  tools/func-tests.sh  Executable file
@@ -0,0 +1,135 @@
|
|||||||
|
#!/usr/bin/env bash
|
||||||
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||||
|
# not use this file except in compliance with the License. You may obtain
|
||||||
|
# a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||||
|
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||||
|
# License for the specific language governing permissions and limitations
|
||||||
|
# under the License.
|
||||||
|
|
||||||
|
# Be strict (but not too much: '-u' doesn't always play nice with devstack)
|
||||||
|
set -eo pipefail
|
||||||
|
|
||||||
|
readonly PROGDIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||||
|
|
||||||
|
function assert_compute {
|
||||||
|
if [[ $(nova list | wc -l) -lt 5 ]]; then
|
||||||
|
echo "Less than one VM, someone cleaned our VM :("
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
}
|
||||||
|
|
||||||
|
function assert_network {
|
||||||
|
# We expect at least 1 "" (free), 1 "compute:",
|
||||||
|
# 1 "network:router_interface" and 1 "network:dhcp" ports
|
||||||
|
if [[ $(neutron port-list | wc -l) -lt 8 ]]; then
|
||||||
|
echo "Less than 4 ports, someone cleaned our ports :("
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# We expect at least 2 security groups (default + one created by populate)
|
||||||
|
if [[ $(openstack security group list | wc -l) -lt 6 ]]; then
|
||||||
|
echo "Less than 2 security groups, someone cleaned our sec-groups :("
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ $(openstack floating ip list | wc -l) -lt 5 ]]; then
|
||||||
|
echo "Less than one floating ip, someone cleaned our FIP :("
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
function assert_volume {
|
||||||
|
if [[ $(openstack volume backup list | wc -l) -lt 5 ]]; then
|
||||||
|
echo "Less than one volume backup, someone cleaned our backup:("
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
########################
|
||||||
|
### Pre check
|
||||||
|
########################
|
||||||
|
source ~/devstack/openrc admin admin
|
||||||
|
if [[ ! "$(openstack flavor list)" =~ 'm1.nano' ]]; then
|
||||||
|
openstack flavor create --id 42 --ram 64 --disk 0 --vcpus 1 m1.nano
|
||||||
|
fi
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
########################
|
||||||
|
### Populate
|
||||||
|
########################
|
||||||
|
pid=()
|
||||||
|
|
||||||
|
(source ~/devstack/openrc admin admin && ${PROGDIR}/populate.sh) &
|
||||||
|
pid+=($!)
|
||||||
|
|
||||||
|
(source ~/devstack/openrc demo demo && ${PROGDIR}/populate.sh) &
|
||||||
|
pid+=($!)
|
||||||
|
|
||||||
|
(source ~/devstack/openrc demo invisible_to_admin && ${PROGDIR}/populate.sh) &
|
||||||
|
pid+=($!)
|
||||||
|
|
||||||
|
(source ~/devstack/openrc alt_demo alt_demo && ${PROGDIR}/populate.sh) &
|
||||||
|
pid+=($!)
|
||||||
|
|
||||||
|
for i in ${!pid[@]}; do
|
||||||
|
wait ${pid[i]}
|
||||||
|
if [[ $? -ne 0 ]]; then
|
||||||
|
echo "One of the 'populate.sh' execution failed."
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
unset "pid[$i]"
|
||||||
|
done
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
########################
|
||||||
|
### Cleanup
|
||||||
|
########################
|
||||||
|
tox -e run -- --os-cloud devstack-admin --purge-own-project --verbose # purges admin/admin
|
||||||
|
|
||||||
|
source ~/devstack/openrc demo demo
|
||||||
|
assert_compute && assert_network && assert_volume
|
||||||
|
|
||||||
|
tox -e run -- --os-cloud devstack --purge-own-project --verbose # purges demo/demo
|
||||||
|
|
||||||
|
source ~/devstack/openrc demo invisible_to_admin
|
||||||
|
assert_compute && assert_network && assert_volume
|
||||||
|
|
||||||
|
tox -e run -- --os-auth-url http://localhost/identity_admin --os-username demo --os-project-name invisible_to_admin --os-password testtest --purge-own-project --verbose
|
||||||
|
|
||||||
|
source ~/devstack/openrc alt_demo alt_demo
|
||||||
|
assert_compute && assert_network && assert_volume
|
||||||
|
|
||||||
|
source ~/devstack/openrc admin admin
|
||||||
|
openstack project set --disable alt_demo
|
||||||
|
tox -e run -- --os-auth-url http://localhost/identity_admin --os-username admin --os-project-name admin --os-password testtest --purge-project alt_demo --verbose
|
||||||
|
openstack project set --enable alt_demo
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
########################
|
||||||
|
### Final assertion
|
||||||
|
########################
|
||||||
|
if [[ $(nova list --all-tenants --minimal | wc -l) -ne 4 ]]; then
|
||||||
|
echo "Not all VMs were cleaned up"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ $(neutron port-list | wc -l) -ne 1 ]]; then # This also checks FIP
|
||||||
|
echo "Not all ports were cleaned up"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ $( cinder backup-list --all-tenants | wc -l) -ne 4 ]]; then
|
||||||
|
echo "Not all volume backups were cleaned up"
|
||||||
|
exit 1
|
||||||
|
fi
|
@ -1,260 +0,0 @@
|
|||||||
#!/usr/bin/env bash
|
|
||||||
|
|
||||||
# This script populates the project set in the environment variable
|
|
||||||
# OS_TENANT_NAME with various resources. The purpose is to test
|
|
||||||
# ospurge.
|
|
||||||
|
|
||||||
# Be strict
|
|
||||||
set -xue
|
|
||||||
set -o pipefail
|
|
||||||
|
|
||||||
# Check if needed environment variable OS_TENANT_NAME is set and non-empty.
|
|
||||||
: "${OS_TENANT_NAME:?Need to set OS_TENANT_NAME non-empty}"
|
|
||||||
|
|
||||||
TOP_DIR=$(cd $(dirname "$0") && pwd)
|
|
||||||
source $TOP_DIR/utils.bash
|
|
||||||
|
|
||||||
UUID=$(cat /proc/sys/kernel/random/uuid)
|
|
||||||
|
|
||||||
# Name of external network
|
|
||||||
EXTNET_NAME=${EXTNET_NAME:-public}
|
|
||||||
# Name of flavor used to spawn a VM
|
|
||||||
FLAVOR=${FLAVOR:-m1.small}
|
|
||||||
# Image used for the VM
|
|
||||||
VMIMG_NAME=${VMIMG_NAME:-cirros-0.3.4-x86_64-uec}
|
|
||||||
|
|
||||||
|
|
||||||
################################
|
|
||||||
### Check resources exist
|
|
||||||
### Do that early to fail early
|
|
||||||
################################
|
|
||||||
# Retrieving external network ID
|
|
||||||
EXTNET_ID=$(neutron net-show $EXTNET_NAME | awk '/ id /{print $4}')
|
|
||||||
exit_if_empty "$EXTNET_ID" "Unable to retrieve ID of external network $EXTNET_NAME"
|
|
||||||
|
|
||||||
exit_if_empty "$(nova flavor-list | grep $FLAVOR)" "Flavor $FLAVOR is unknown to Nova"
|
|
||||||
|
|
||||||
# Looking for the $VMIMG_NAME image and getting its ID
|
|
||||||
IMAGE_ID=$(nova image-list | awk "/ $VMIMG_NAME /{print \$2}")
|
|
||||||
exit_if_empty "$IMAGE_ID" "Image $VMIMG_NAME could not be found"
|
|
||||||
|
|
||||||
|
|
||||||
KEY_NAME="ospurge_test_key_$UUID"
|
|
||||||
NET_NAME="ospurge_test_net_$UUID"
|
|
||||||
SUBNET_NAME="ospurge_test_subnet_$UUID"
|
|
||||||
ROUT_NAME="ospurge_test_rout_$UUID"
|
|
||||||
VM_NAME="ospurge_test_vm_$UUID"
|
|
||||||
VMSNAP_NAME="ospurge_test_vmsnap_$UUID"
|
|
||||||
VOL_NAME="ospurge_test_vol_$UUID"
|
|
||||||
VOLSNAP_NAME="ospurge_test_volsnap_$UUID"
|
|
||||||
VOLBACK_NAME="ospurge_test_volback_$UUID"
|
|
||||||
IMG_NAME="ospurge_test_image_$UUID"
|
|
||||||
SECGRP_NAME="ospurge_test_secgroup_$UUID"
|
|
||||||
CONT_NAME="ospurge_test_container_$UUID"
|
|
||||||
FLAV_NAME="ospurge_test_flavor_$UUID"
|
|
||||||
STACK_NAME="ospurge_test_stack_$UUID"
|
|
||||||
ALARM_NAME="ospurge_test_alarm_$UUID"
|
|
||||||
FW_NAME="ospurge_test_firewall_$UUID"
|
|
||||||
FW_POLICY_NAME="ospurge_test_policy_$UUID"
|
|
||||||
FW_RULE_NAME="ospurge_test_rule_$UUID"
|
|
||||||
LB_POOL_NAME="ospurge_test_pool_$UUID"
|
|
||||||
LB_VIP_NAME="ospurge_test_vip_$UUID"
|
|
||||||
LB_MEMBER_NAME="ospurge_test_member_$UUID"
|
|
||||||
METER_NAME="ospurge_test_meter_$UUID"
|
|
||||||
|
|
||||||
# Create a file that will be used to populate Glance and Swift
|
|
||||||
dd if="/dev/zero" of="zero_disk.raw" bs=1M count=5
|
|
||||||
|
|
||||||
|
|
||||||
###############################
|
|
||||||
### Swift
|
|
||||||
###############################
|
|
||||||
swift upload $CONT_NAME zero_disk.raw
|
|
||||||
exit_on_failure "Unable to upload file in container $CONT_NAME"
|
|
||||||
|
|
||||||
|
|
||||||
###############################
|
|
||||||
### Cinder
|
|
||||||
###############################
|
|
||||||
# Create a volume
|
|
||||||
cinder create --display-name $VOL_NAME 5
|
|
||||||
exit_on_failure "Unable to create volume"
|
|
||||||
|
|
||||||
# Getting ID of volume
|
|
||||||
VOL_ID=$(cinder show $VOL_NAME | awk '/ id /{print $4}')
|
|
||||||
exit_if_empty "$VOL_ID" "Unable to retrieve ID of volume $VOL_NAME"
|
|
||||||
|
|
||||||
# Snapshotting volume (note that it has to be detached, unless using --force)
|
|
||||||
cinder snapshot-create --display-name $VOLSNAP_NAME $VOL_ID
|
|
||||||
exit_on_failure "Unable to snapshot volume $VOL_NAME"
|
|
||||||
|
|
||||||
# Backuping volume
|
|
||||||
# Don't exit if this fails - as we may test platforms that don't
|
|
||||||
# provide this feature
|
|
||||||
cinder backup-create --display-name $VOLBACK_NAME $VOL_ID || true
|
|
||||||
|
|
||||||
|
|
||||||
###############################
|
|
||||||
### Neutron
|
|
||||||
###############################
|
|
||||||
# Create a private network and check it exists
|
|
||||||
neutron net-create $NET_NAME
|
|
||||||
exit_on_failure "Creation of network $NET_NAME failed"
|
|
||||||
|
|
||||||
# Getting ID of private network
|
|
||||||
NET_ID=$(neutron net-show $NET_NAME | awk '/ id /{print $4}')
|
|
||||||
exit_if_empty "$NET_ID" "Unable to retrieve ID of network $NET_NAME"
|
|
||||||
|
|
||||||
# Add network's subnet
|
|
||||||
neutron subnet-create --name $SUBNET_NAME $NET_ID 192.168.0.0/24
|
|
||||||
exit_on_failure "Unable to create subnet $SUBNET_NAME for network $NET_ID"
|
|
||||||
|
|
||||||
# Create an unused port
|
|
||||||
neutron port-create $NET_ID
|
|
||||||
|
|
||||||
# retrieving subnet ID
|
|
||||||
SUBNET_ID=$(neutron subnet-show $SUBNET_NAME | awk '/ id /{print $4}')
|
|
||||||
exit_if_empty "$SUBNET_ID" "Unable to retrieve ID of subnet $SUBNET_NAME"
|
|
||||||
|
|
||||||
# Creating a router
|
|
||||||
neutron router-create $ROUT_NAME
|
|
||||||
exit_on_failure "Unable to create router $ROUT_NAME"
|
|
||||||
|
|
||||||
# Retrieving router ID
|
|
||||||
ROUT_ID=$(neutron router-show $ROUT_NAME | awk '/ id /{print $4}')
|
|
||||||
exit_if_empty "$ROUT_ID" "Unable to retrieve ID of router $ROUT_NAME"
|
|
||||||
|
|
||||||
# Setting router's gateway
|
|
||||||
neutron router-gateway-set $ROUT_ID $EXTNET_ID
|
|
||||||
exit_on_failure "Unable to set gateway to router $ROUT_NAME"
|
|
||||||
|
|
||||||
# Plugging router on internal network
|
|
||||||
neutron router-interface-add $ROUT_ID $SUBNET_ID
|
|
||||||
exit_on_failure "Unable to add interface on subnet $SUBNET_NAME to router $ROUT_NAME"
|
|
||||||
|
|
||||||
# Creating a floating IP and retrieving its IP Address
|
|
||||||
|
|
||||||
FIP_ADD=$(neutron floatingip-create $EXTNET_NAME | awk '/ floating_ip_address /{print $4}')
|
|
||||||
exit_if_empty "$FIP_ADD" "Unable to create or retrieve floating IP"
|
|
||||||
|
|
||||||
# Creating a security group
|
|
||||||
neutron security-group-create $SECGRP_NAME
|
|
||||||
exit_on_failure "Unable to create security group $SECGRP_NAME"
|
|
||||||
|
|
||||||
# Getting security group ID
|
|
||||||
SECGRP_ID=$(neutron security-group-show $SECGRP_NAME | awk '/ id /{print $4}')
|
|
||||||
exit_if_empty "$SECGRP_ID" "Unable to retrieve ID of security group $SECGRP_NAME"
|
|
||||||
|
|
||||||
# Adding a rule to previously created security group
|
|
||||||
|
|
||||||
neutron security-group-rule-create --direction ingress --protocol TCP \
|
|
||||||
--port-range-min 22 --port-range-max 22 --remote-ip-prefix 0.0.0.0/0 \
|
|
||||||
$SECGRP_ID
|
|
||||||
|
|
||||||
# Creating a firewall rule
|
|
||||||
# Don't exit if this fails - as we may test platforms that don't
|
|
||||||
# provide this feature
|
|
||||||
neutron firewall-rule-create --name $FW_RULE_NAME --protocol tcp --action allow --destination-port 80 || true
|
|
||||||
|
|
||||||
# Creating a firewall policy
|
|
||||||
# Don't exit if this fails - as we may test platforms that don't
|
|
||||||
# provide this feature
|
|
||||||
neutron firewall-policy-create --firewall-rules "$FW_RULE_NAME" $FW_POLICY_NAME || true
|
|
||||||
|
|
||||||
# Creating a firewall
|
|
||||||
# Don't exit if this fails - as we may test platforms that don't
|
|
||||||
# provide this feature
|
|
||||||
neutron firewall-create --name $FW_NAME $FW_POLICY_NAME || true
|
|
||||||
|
|
||||||
# Creating a loadbalancer pool
|
|
||||||
# Don't exit if this fails - as we may test platforms that don't
|
|
||||||
# provide this feature
|
|
||||||
neutron lb-pool-create --lb-method ROUND_ROBIN --name $LB_POOL_NAME --protocol HTTP --subnet-id $SUBNET_ID || true
|
|
||||||
|
|
||||||
# Creating a loadbalancer VIP address
|
|
||||||
# Don't exit if this fails - as we may test platforms that don't
|
|
||||||
# provide this feature
|
|
||||||
neutron lb-vip-create --name $LB_VIP_NAME --protocol-port 80 --protocol HTTP --subnet-id $SUBNET_ID $LB_POOL_NAME || true
|
|
||||||
|
|
||||||
# Creating a loadbalancer member
|
|
||||||
neutron lb-member-create --address 192.168.0.153 --protocol-port 80 $LB_POOL_NAME || true
|
|
||||||
|
|
||||||
# Creating a loadbalancer health monitor
|
|
||||||
# Don't exit if this fails - as we may test platforms that don't
|
|
||||||
# provide this feature
|
|
||||||
neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3 || true
|
|
||||||
|
|
||||||
# Creating a metering label
|
|
||||||
# Don't exit if this fails - as we may test platforms that don't
|
|
||||||
# provide this feature
|
|
||||||
neutron meter-label-create $METER_NAME || true
|
|
||||||
|
|
||||||
###############################
|
|
||||||
### Nova
|
|
||||||
###############################
|
|
||||||
# Launch a VM
|
|
||||||
nova boot --flavor $FLAVOR --image $IMAGE_ID --nic net-id=$NET_ID $VM_NAME
|
|
||||||
exit_on_failure "Unable to boot VM $VM_NAME"
|
|
||||||
|
|
||||||
# Getting ID of VM
|
|
||||||
VM_ID=$(nova show $VM_NAME | awk '/ id /{print $4}')
|
|
||||||
exit_if_empty "$VM_ID" "Unable to retrieve ID of VM $VM_NAME"
|
|
||||||
|
|
||||||
|
|
||||||
###############################
|
|
||||||
### Glance
|
|
||||||
###############################
|
|
||||||
# Upload glance image
|
|
||||||
glance image-create --name $IMG_NAME --disk-format raw \
|
|
||||||
--container-format bare --file zero_disk.raw
|
|
||||||
exit_on_failure "Unable to create Glance iamge $IMG_NAME"
|
|
||||||
|
|
||||||
|
|
||||||
###############################
|
|
||||||
### Heat
|
|
||||||
###############################
|
|
||||||
echo 'heat_template_version: 2013-05-23
|
|
||||||
description: >
|
|
||||||
Hello world HOT template' > dummy_stack.yaml
|
|
||||||
# Don't exit if this fails - as we may test platforms that don't
|
|
||||||
# provide this feature
|
|
||||||
heat stack-create -f dummy_stack.yaml $STACK_NAME || true
|
|
||||||
|
|
||||||
|
|
||||||
# Wait for VM to be spawned before snapshotting the VM
|
|
||||||
VM_STATUS=$(nova show $VM_ID | awk '/ status /{print $4}')
|
|
||||||
while [ $VM_STATUS != "ACTIVE" ]; do
|
|
||||||
echo "Status of VM $VM_NAME is $VM_STATUS. Waiting 1 sec"
|
|
||||||
sleep 1
|
|
||||||
VM_STATUS=$(nova show $VM_ID | awk '/ status /{print $4}')
|
|
||||||
done
|
|
||||||
|
|
||||||
### Link resources
|
|
||||||
|
|
||||||
# Associate floating IP
|
|
||||||
nova floating-ip-associate $VM_ID $FIP_ADD
|
|
||||||
exit_on_failure "Unable to associate floating IP $FIP_ADD to VM $VM_NAME"
|
|
||||||
|
|
||||||
# Wait for volume to be available
|
|
||||||
VOL_STATUS=$(cinder show $VOL_ID | awk '/ status /{print $4}')
|
|
||||||
while [ $VOL_STATUS != "available" ]; do
|
|
||||||
echo "Status of volume $VOL_NAME is $VOL_STATUS. Waiting 1 sec"
|
|
||||||
sleep 1
|
|
||||||
VOL_STATUS=$(cinder show $VOL_ID | awk '/ status /{print $4}')
|
|
||||||
done
|
|
||||||
|
|
||||||
# Attach volume
|
|
||||||
# This must be done before instance snapshot otherwise we could run into
|
|
||||||
# ERROR (Conflict): Cannot 'attach_volume' while instance is in task_state
|
|
||||||
# image_pending_upload
|
|
||||||
nova volume-attach $VM_ID $VOL_ID
|
|
||||||
exit_on_failure "Unable to attach volume $VOL_ID to VM $VM_ID"
|
|
||||||
|
|
||||||
# Create an image
|
|
||||||
nova image-create $VM_ID $VMSNAP_NAME
|
|
||||||
exit_on_failure "Unable to create VM Snapshot of $VM_NAME"
|
|
||||||
|
|
||||||
# Create a ceilometer alarm
|
|
||||||
# Don't exit if this fails - as we may test platforms that don't
|
|
||||||
# provide this feature
|
|
||||||
ceilometer alarm-create --name $ALARM_NAME --meter-name cpu_util --threshold 70.0 || true
|
|
199  tools/populate.sh  Executable file
@@ -0,0 +1,199 @@
|
|||||||
|
#!/usr/bin/env bash
|
||||||
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||||
|
# not use this file except in compliance with the License. You may obtain
|
||||||
|
# a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# This script populates the project set in the environment variable
# OS_PROJECT_NAME with various resources. The purpose is to test
# ospurge.

# Be strict
set -ueo pipefail

function exit_on_failure {
    RET_CODE=$?
    ERR_MSG=$1
    if [ ${RET_CODE} -ne 0 ]; then
        echo $ERR_MSG
        exit 1
    fi
}

function exit_if_empty {
    STRING=${1:-}
    ERR_MSG=${2:-}
    if [ -z "$STRING" ]; then
        echo $ERR_MSG
        exit 1
    fi
}

function cleanup {
    if [[ -f "${UUID}.raw" ]]; then
        rm "${UUID}.raw"
    fi
}
# Check if needed environment variable OS_PROJECT_NAME is set and non-empty.
: "${OS_PROJECT_NAME:?Need to set OS_PROJECT_NAME non-empty}"

# Some random UUID + Unicode characters
UUID="♫$(cat /proc/sys/kernel/random/uuid)✓"
# Name of external network
EXTNET_NAME=${EXTNET_NAME:-public}
# Name of flavor used to spawn a VM
FLAVOR=${FLAVOR:-m1.nano}
# Image used for the VM
VMIMG_NAME=${VMIMG_NAME:-cirros-0.3.4-x86_64-uec}



################################
### Check resources exist
### Do that early to fail early
################################
# Retrieve external network ID
EXTNET_ID=$(neutron net-show $EXTNET_NAME | awk '/ id /{print $4}')
exit_if_empty "$EXTNET_ID" "Unable to retrieve ID of external network $EXTNET_NAME"

exit_if_empty "$(nova flavor-list | grep ${FLAVOR})" "Flavor $FLAVOR is unknown to Nova"

# Look for the $VMIMG_NAME image and get its ID
IMAGE_ID=$(openstack image list | awk "/ $VMIMG_NAME /{print \$2}")
exit_if_empty "$IMAGE_ID" "Image $VMIMG_NAME could not be found"

# Create a file that will be used to populate Glance and Swift
dd if="/dev/zero" of="${UUID}.raw" bs=1M count=5
trap cleanup SIGHUP SIGINT SIGTERM EXIT



###############################
### Cinder
###############################
# Create a volume
VOL_ID=$(cinder create --display-name ${UUID} 1 | awk '/ id /{print $4}')
exit_on_failure "Unable to create volume"
exit_if_empty "$VOL_ID" "Unable to retrieve ID of volume ${UUID}"

# Snapshot the volume (note that it has to be detached, unless using --force)
cinder snapshot-create --display-name ${UUID} $VOL_ID
exit_on_failure "Unable to snapshot volume ${UUID}"

# Backup volume
# Don't exit on failure as Cinder Backup is not available on all clouds
cinder backup-create --display-name ${UUID} $VOL_ID || true



###############################
### Neutron
###############################
# Create a private network and check it exists
NET_ID=$(neutron net-create ${UUID} | awk '/ id /{print $4}')
exit_on_failure "Creation of network ${UUID} failed"
exit_if_empty "$NET_ID" "Unable to retrieve ID of network ${UUID}"

# Add network's subnet
SUBNET_ID=$(neutron subnet-create --name ${UUID} $NET_ID 192.168.0.0/24 | awk '/ id /{print $4}')
exit_on_failure "Unable to create subnet ${UUID} for network $NET_ID"
exit_if_empty "$SUBNET_ID" "Unable to retrieve ID of subnet ${UUID}"

# Create an unused port
neutron port-create $NET_ID

# Create a router
ROUT_ID=$(neutron router-create ${UUID} | awk '/ id /{print $4}')
exit_on_failure "Unable to create router ${UUID}"
exit_if_empty "$ROUT_ID" "Unable to retrieve ID of router ${UUID}"

# Set router's gateway
neutron router-gateway-set $ROUT_ID $EXTNET_ID
exit_on_failure "Unable to set gateway to router ${UUID}"

# Connect router on internal network
neutron router-interface-add $ROUT_ID $SUBNET_ID
exit_on_failure "Unable to add interface on subnet ${UUID} to router ${UUID}"

# Create a floating IP and retrieve its IP Address
FIP_ADD=$(neutron floatingip-create $EXTNET_NAME | awk '/ floating_ip_address /{print $4}')
exit_if_empty "$FIP_ADD" "Unable to create or retrieve floating IP"

# Create a security group
SECGRP_ID=$(neutron security-group-create ${UUID} | awk '/ id /{print $4}')
exit_on_failure "Unable to create security group ${UUID}"
exit_if_empty "$SECGRP_ID" "Unable to retrieve ID of security group ${UUID}"

# Add a rule to previously created security group
neutron security-group-rule-create --direction ingress --protocol TCP \
    --port-range-min 22 --port-range-max 22 --remote-ip-prefix 0.0.0.0/0 $SECGRP_ID



###############################
### Nova
###############################
# Launch a VM
VM_ID=$(nova boot --flavor $FLAVOR --image $IMAGE_ID --nic net-id=$NET_ID ${UUID} | awk '/ id /{print $4}')
exit_on_failure "Unable to boot VM ${UUID}"
exit_if_empty "$VM_ID" "Unable to retrieve ID of VM ${UUID}"



###############################
### Glance
###############################
# Upload glance image
glance image-create --name ${UUID} --disk-format raw --container-format bare --file ${UUID}.raw
exit_on_failure "Unable to create Glance iamge ${UUID}"
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
###############################
|
||||||
|
### Swift
|
||||||
|
###############################
|
||||||
|
# Don't exit on failure as Swift is not available on all clouds
|
||||||
|
swift upload ${UUID} ${UUID}.raw || true
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
###############################
|
||||||
|
### Link resources
|
||||||
|
###############################
|
||||||
|
# Wait for volume to be available
|
||||||
|
VOL_STATUS=$(cinder show $VOL_ID | awk '/ status /{print $4}')
|
||||||
|
while [ $VOL_STATUS != "available" ]; do
|
||||||
|
echo "Status of volume ${UUID} is $VOL_STATUS. Waiting 3 sec"
|
||||||
|
sleep 3
|
||||||
|
VOL_STATUS=$(cinder show $VOL_ID | awk '/ status /{print $4}')
|
||||||
|
done
|
||||||
|
|
||||||
|
# Wait for VM to be active
|
||||||
|
VM_STATUS=$(nova show --minimal $VM_ID | awk '/ status /{print $4}')
|
||||||
|
while [ $VM_STATUS != "ACTIVE" ]; do
|
||||||
|
echo "Status of VM ${UUID} is $VM_STATUS. Waiting 3 sec"
|
||||||
|
sleep 3
|
||||||
|
VM_STATUS=$(nova show --minimal $VM_ID | awk '/ status /{print $4}')
|
||||||
|
done
|
||||||
|
|
||||||
|
# Attach volume
|
||||||
|
# This must be done before instance snapshot otherwise we could run into
|
||||||
|
# ERROR (Conflict): Cannot 'attach_volume' while instance is in task_state
|
||||||
|
# image_pending_upload
|
||||||
|
nova volume-attach $VM_ID $VOL_ID
|
||||||
|
exit_on_failure "Unable to attach volume $VOL_ID to VM $VM_ID"
|
||||||
|
|
||||||
|
# Associate floating IP
|
||||||
|
# It as far away from the network creation as possible, because associating
|
||||||
|
# a FIP requires the network to be 'UP' (which could take several secs)
|
||||||
|
# See https://github.com/openstack/nova/blob/1a30fda13ae78f4e40b848cacbf6278a359a91cb/nova/api/openstack/compute/floating_ips.py#L229
|
||||||
|
nova floating-ip-associate $VM_ID $FIP_ADD
|
||||||
|
exit_on_failure "Unable to associate floating IP $FIP_ADD to VM ${UUID}"
|
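The populate script above only needs the usual OpenStack CLI credentials in the environment plus OS_PROJECT_NAME; a minimal sketch of running it against a devstack follows (the "demo" credentials are an assumption, and EXTNET_NAME/FLAVOR are shown with their defaults only to illustrate that they can be overridden):

    # assumption: a devstack with a "demo" project; only OS_PROJECT_NAME is strictly required
    source ~/devstack/openrc demo demo
    export OS_PROJECT_NAME=demo
    EXTNET_NAME=public FLAVOR=m1.nano ./tools/populate.sh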
23
tools/print_order.py
Executable file
@ -0,0 +1,23 @@
#!/usr/bin/env python3
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import operator

from ospurge import utils

resource_managers = sorted(
    [cls for cls in utils.get_all_resource_classes()],
    key=operator.methodcaller('order')
)

for cls in resource_managers:
    print("{} => {}".format(cls.__name__, cls.ORDER))
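For reference, print_order.py is meant to be run from a checkout where ospurge is importable; it prints one "ClassName => ORDER" line per resource class, sorted into deletion order. A minimal sketch (the output values below are hypothetical; the real names and numbers come from the ospurge resource classes):

    python3 tools/print_order.py
    # hypothetical output shape:
    #   FloatingIPs => 25
    #   Networks => 30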
46
tools/stress.sh
Executable file
@ -0,0 +1,46 @@
#!/usr/bin/env bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# Be strict (but not too much: '-u' doesn't always play nice with devstack)
set -eo pipefail

readonly PROGDIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

source ~/devstack/openrc admin admin

###############################
### Set quotas
###############################
project_id=$(openstack token issue | awk '/ project_id /{print $4}')
openstack quota set --subnets 15 ${project_id}
openstack quota set --networks 15 ${project_id}
openstack quota set --volumes 15 ${project_id}
openstack quota set --snapshots 15 ${project_id}
openstack quota set --instances 15 ${project_id}
openstack quota set --secgroups 15 ${project_id}
openstack quota set --routers 15 ${project_id}
openstack quota set --backups 15 ${project_id}



###############################
### Populate project
###############################
seq 12 | parallel --halt-on-error 1 -n0 -j3 ${PROGDIR}/populate.sh



###############################
### Cleanup project
###############################
tox -e run -- --os-cloud devstack-admin --purge-own-project --verbose
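In the populate step above, "seq 12 | parallel --halt-on-error 1 -n0 -j3 ${PROGDIR}/populate.sh" runs populate.sh twelve times with at most three concurrent runs; -n0 tells GNU parallel to pass none of the seq output as arguments (the numbers only drive how many runs happen) and --halt-on-error 1 stops launching new runs once one of them fails. A rough GNU-parallel-free equivalent, purely as a sketch (fixed batches of three, and it does not stop on the first failure):

    # run from the repository root
    for i in $(seq 12); do
        ./tools/populate.sh &
        # crude throttling: wait for every batch of three background jobs
        if (( i % 3 == 0 )); then
            wait
        fi
    done
    wait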
@ -1,19 +0,0 @@
#!/usr/bin/env bash

function exit_on_failure {
    RET_CODE=$?
    ERR_MSG=$1
    if [ $RET_CODE -ne 0 ]; then
        echo $ERR_MSG
        exit 1
    fi
}

function exit_if_empty {
    STRING=${1:-}
    ERR_MSG=${2:-}
    if [ -z "$STRING" ]; then
        echo $ERR_MSG
        exit 1
    fi
}
75
tox.ini
@ -1,33 +1,68 @@
 [tox]
-envlist = pep8,py27
+envlist = pep8,pip-check-reqs,coverage
-minversion = 1.6
+minversion = 1.9
 skipsdist = True

 [testenv]
-usedevelop = True
-install_command = pip install -U {opts} {packages}
-setenv = VIRTUAL_ENV={envdir}
-         OS_STDOUT_NOCAPTURE=False
-         OS_STDERR_NOCAPTURE=False
-deps = -r{toxinidir}/requirements.txt
-       -r{toxinidir}/test-requirements.txt
-commands = python setup.py testr --testr-args='{posargs}'
+deps =
+    -r{toxinidir}/requirements.txt
+    -r{toxinidir}/test-requirements.txt
+basepython = python3.5
+commands =
+    python setup.py testr --testr-args='{posargs}'

-[testenv:venv]
-commands = {posargs}
+[testenv:run]
+usedevelop=True
+deps =
+    -r{toxinidir}/requirements.txt
+commands =
+    ospurge {posargs:--help}

 [testenv:pep8]
+skip_install = True
+whitelist_externals = bash
+deps =
+    -r{toxinidir}/test-requirements.txt
 commands =
-    flake8
-    bashate tools/ospopulate.bash tools/utils.bash
+    flake8 {posargs}
+    bash -c "find {toxinidir}/tools -type f -name *.sh \
+        -print0 | xargs -0 bashate -v -iE006 -eE005,E042"

-[flake8]
-# E501 line too long
-ignore = E501
-show-source = True
-exclude = .venv,.tox,dist,doc,*egg,build
+[testenv:coverage]
+commands =
+    coverage erase
+    coverage run --source=ospurge -m unittest discover --verbose
+    coverage report --omit="ospurge/tests/*" --show-missing --skip-covered --fail-under 100
+
+[testenv:mypy]
+skip_install = True
+deps =
+    -r{toxinidir}/test-requirements.txt
+commands =
+    mypy --check-untyped-defs --disallow-untyped-defs --silent-imports ospurge
+
+[testenv:pip-check-reqs]
+# Do not install test-requirements as that will pollute the virtualenv for
+# determining missing packages.
+# This also means that pip-check-reqs must be installed separately, outside
+# of the requirements.txt files
+deps = -r{toxinidir}/requirements.txt
+       pip_check_reqs
+commands=
+    pip-extra-reqs -d ospurge
+    pip-missing-reqs -d ospurge

 [testenv:docs]
+whitelist_externals = echo
+skip_install = True
+deps =
+    -r{toxinidir}/test-requirements.txt
 commands =
-    doc8 -e .rst doc README.rst
+    doc8 -e .rst doc/source README.rst
-    python setup.py build_sphinx -b html
+    python setup.py build_sphinx -E -b html
+    echo "Documentation location: {toxinidir}/doc/build/html/index.html"
+
+[flake8]
+ignore = H404,H405
+enable-extensions = H106,H203,H904
+show-source = True
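With the tox.ini above, the new environments can be exercised locally; a typical sequence might be the following (the last line mirrors the cleanup step of tools/stress.sh and assumes a "devstack-admin" entry exists in clouds.yaml):

    tox -e pep8        # flake8 plus bashate on the shell scripts under tools/
    tox -e coverage    # unit tests; fails if coverage drops below 100%
    tox -e mypy        # static type checking of the ospurge package
    tox -e run -- --os-cloud devstack-admin --purge-own-project --verbose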