The subscription confirmation feature will consist of four patches:
1. Webhook with MongoDB
2. Email with MongoDB
3. Webhook with Redis
4. Email with Redis
This patch is the first part of the subscription confirmation feature,
covering webhook with MongoDB. The others will be delivered in
follow-up patches.
This patch does the following:
1. Add the v2/queue/<queue_name>/subscription/<subscription_id>/confirm
endpoint.
2. Add a new config option: "require_confirmation".
3. Add a new property "confirmed" to the subscription resource for the
MongoDB driver.
4. Add a new policy "subscription:confirm".
5. Add a new property "message type" for notifications.
6. Use the pre-signed URL in the confirm request.
7. Re-use POST subscription to allow re-confirmation.
8. Update notifications for webhook subscriptions with MongoDB.
9. Support unsubscribing from a subscription.
10. Add tests for the feature.
11. Add docs and samples.
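As an illustrative sketch only (the helper name and return shape are
assumptions, not part of this patch), a confirm call against the new
endpoint could be built like this:

```python
# Hypothetical illustration of the new confirm endpoint; the helper
# name and return shape are assumptions, not Zaqar's actual client API.
import json


def build_confirm_request(base_url, queue_name, subscription_id,
                          confirmed=True):
    """Build the (method, url, body) triple for a confirm call.

    The endpoint path follows the one added by this patch:
    v2/queue/<queue_name>/subscription/<subscription_id>/confirm
    """
    url = "{0}/v2/queue/{1}/subscription/{2}/confirm".format(
        base_url.rstrip("/"), queue_name, subscription_id)
    body = json.dumps({"confirmed": confirmed})
    return "PUT", url, body
```

Passing confirmed=False would correspond to the unsubscribe case
mentioned above.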
DocImpact
APIImpact
Change-Id: Id38d4a5b4f9303b12e22e2b5c248facda4c00143
Implements: blueprint subscription-confirmation-support
This adds the ability to send keystone-authenticated notifications using
trusts. To do so, you specify the posted URL with the "trust+" prefix,
and Zaqar will create and store a trust when subscribing to a queue, if
a trust is not provided in the subscription options.
It also adds a capability to the webhook task to send more
structured data in the notification, allowing the Zaqar
message to be included in the data.
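A minimal sketch of how a "trust+" subscriber URL might be recognized
and stripped before delivery (the helper name is hypothetical, not the
actual Zaqar code):

```python
# Hypothetical helper: detect the "trust+" prefix described above and
# return the real URL to post to.
def parse_subscriber(subscriber_url):
    """Return (needs_trust, real_url) for a subscriber URL.

    "trust+http://host/path" -> (True, "http://host/path")
    "http://host/path"       -> (False, "http://host/path")
    """
    prefix = "trust+"
    if subscriber_url.startswith(prefix):
        return True, subscriber_url[len(prefix):]
    return False, subscriber_url
```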
blueprint mistral-notifications
DocImpact
Change-Id: I12b9c1b34cdd220fcf1bdc2720043d4a8f75dc85
The generated zaqar.conf.sample is currently missing the oslo.cache
library options, but they are used by Zaqar and are important in
production installations.
This patch makes the 'tox -e genconfig' command also generate the
oslo.cache options.
Change-Id: Ia8f78fd5a106888882f882aed8d8355e7e1e459e
Closes-Bug: 1560707
The current config generation is not producing the correct configuration
file. The net result is that even when the auth_strategy option is set
to keystone, the 'keystone_authtoken' section is still missing. This
patch fixes the mistake.
Change-Id: I2a37cde436736d39be93cd082a8ab13b58e21133
The zaqar.conf.sample file can be generated by the 'tox -e genconfig'
command. The generated zaqar.conf.sample file is supposed to have a
[drivers] section as per the zaqar README.rst.
However, the [drivers] section is not present in the zaqar.conf.sample
file. This patch fixes it.
Closes-Bug: #1501130
Change-Id: Ic1d98680fe1040f68944b529b1c4c4ec2c835cea
This commit adds support for RBAC using oslo.policy. This allows Zaqar
to have fine-grained access control over the resources it exposes.
As of this patch, the implementation allows access control on a
per-operation basis rather than on specific resources.
Co-Authored-by: Thomas Herve <therve@redhat.com>
Co-Authored-by: Flavio Percoco <flaper87@gmail.com>
blueprint: fine-grained-permissions
Change-Id: I90374a11815ac2bd9d31768588719d2d4c4e7f5d
Add a sample configuration for running the wsgi transport using uwsgi,
and make devstack use it, while running zaqar-server with the websocket
transport.
This allows running both websockets and wsgi transports on devstack.
Change-Id: Ifac7461ec6b0501b1b9021030d9c173cf368a59b
Since we are going to implement notifications, the 'queues'
package is no longer suitable for the current scope of zaqar. This
patch removes the 'queues' package.
Partially implements: blueprint notifications
Change-Id: I6984f31f4bd1e646b585c45c088ed239b58587c4
Oslo's config generator has been moved under oslo.config, which doesn't
require using a bash script anymore.
The patch removes the old scripts and updates the generation task in
tox.ini
Closes-Bug: #1373800
Change-Id: Ia757b0d141f8557144108d386496d1e9bfc7333f
The max_message_size option name is misleading. The option
determines the max size of a message post body, so rename the
option to max_messages_post_size.
Change-Id: Ie01cee026e7ebf530cdb2709e2c17d030ad95480
Closes-Bug: #1357397
This patch implements the standard controllers for the redis
storage driver. It has been tested against a localhost Redis
server with ZAQAR_TEST_REDIS=1.
Change-Id: Ib7c100afd11a0410c3f241c1925d5aaf172ce6a8
Partially-Implements: blueprint redis-storage-driver
This commit adds several enhancements to the benchmarking tool: the
server_url and the path to messages can now be configured in the config
file. The default output of the program has changed: it now prints
values in JSON so they can be parsed more easily. The previous
human-readable representation is accessible via the --verbose flag.
The `total_requests` metric now counts all performed requests (whether
failed or successful), and a new metric, `successful_requests`, was
introduced to store the count of successful requests.
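The total/successful split can be sketched as follows (illustrative
only; the function name and outcome representation are not the
benchmarking tool's actual internals):

```python
# Illustrative only: derive the two metrics described above from a
# list of request outcomes, emitting JSON as the new default output.
import json


def summarize(outcomes):
    """outcomes: iterable of booleans (True = request succeeded)."""
    outcomes = list(outcomes)
    stats = {
        "total_requests": len(outcomes),
        "successful_requests": sum(1 for ok in outcomes if ok),
    }
    return json.dumps(stats, sort_keys=True)
```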
Change-Id: Id6fe4b2046394a348ba07eb5b2b003c6024b78b0
Partially-implements: blueprint gen-bench-reports
The auth_token middleware in python-keystoneclient is deprecated and
has been moved to the keystonemiddleware repo.
Change-Id: I174b62d035b84aff1cf0d60efb84f7650445f42c
Closes-Bug: #1342274
This patch renames every package, file, and occurrence of Marconi in
the codebase to Zaqar *except* for the .gitreview file, which will have
to be updated *after* I8e587af588d9be0b5ebbab4b0f729b106a2ae537 lands.
Implements: blueprint project-rename
Change-Id: I63cf2c680cead4641f3e430af379452058bce5b3
Expose the 'ssl_keyfile', 'ssl_certfile', 'ssl_cert_reqs' and
'ssl_ca_certs' options for maximum security. By default, SSL is not
enabled unless the ssl parameter is included directly in the MongoDB
URI. ssl_cert_reqs defaults to CERT_REQUIRED, which means the user
must provide 'ssl_ca_certs' if SSL is enabled by adding the ssl
parameter to the MongoDB URI.
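As a hedged sketch, the new options could map onto pymongo-style
connection keyword arguments like this (the helper itself is
illustrative, not the driver's actual code; only the option names come
from this patch):

```python
# Sketch: translate the config options added by this patch into
# connection kwargs. CERT_REQUIRED is the default described above.
import ssl


def ssl_kwargs(conf):
    """conf: dict of the mongodb SSL options added by this patch."""
    kwargs = {}
    for opt in ("ssl_keyfile", "ssl_certfile", "ssl_ca_certs"):
        if conf.get(opt):
            kwargs[opt] = conf[opt]
    # Resolve the string option to the ssl module constant.
    reqs = conf.get("ssl_cert_reqs", "CERT_REQUIRED")
    kwargs["ssl_cert_reqs"] = getattr(ssl, reqs)
    return kwargs
```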
Change-Id: I67cb5a9b2d76625de2932c854d0a696e9118ca6b
Closes-Bug: #1328720
This patch adds oslo's config generator to the source tree and uses it
to generate marconi's sample configs. It also adds a pep8 check that
verifies the config file is up to date.
Change-Id: Iec7defa244dc8649a5c832bb81b9ec6f30f0ee37
Now that we support sqlalchemy and sqlite is deprecated,
marconi.conf-sample should be updated to reflect the
change and avoid confusion.
Closes-Bug: #1288619
Change-Id: Ief5dad6345dc24e70af18e9e47d0f8dd384cee47
This patch adds two features to our current sqlalchemy driver:
- an option to configure the connection URI for the driver
- a skeleton of the ControlDriver
With the ControlDriver, the expected methods are stubbed out.
Small fix: all controller methods not yet implemented now raise
NotImplementedError.
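The stub pattern looks roughly like this (class and method names are
illustrative, not the driver's real interface):

```python
# Illustrative sketch of the "stubbed out" ControlDriver skeleton:
# every expected method raises NotImplementedError until implemented,
# rather than silently passing.
class ControlDriverSkeleton(object):
    """Skeleton ControlDriver with its expected methods stubbed out."""

    def queue_controller(self):
        raise NotImplementedError

    def catalogue_controller(self):
        raise NotImplementedError
```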
Change-Id: I1cd4a4d75cbbee7f0ff574c5be4d11660359ab7e
Partially-Implements: blueprint sql-storage-driver
This patch contains several misc. changes to queue, message, and
claim limits to reduce confusion and bring the implementation in
line with the v1 spec.
1. Removed a couple of WSGI driver config options that are
no longer needed now that we have redefined (and simplified) how
we constrain message and metadata size.
metadata_max_length = 65536
content_max_length = 262144
2. Renamed options to be more readable and consistent
3. Moved options to [transport] section
4. Made max messages that can be claimed its own setting, to reduce confusion
5. Removed enforcing an upper limit on the number of messages that can be
posted; this was never in the spec, and appears to be gold-plating. Now, the
only upper limit is max_message_size.
6. Removed the check on the size of a create claim request since (1) it is
not part of the API spec, and (2) sanity-checks like that are best done by
the web server, before a request even touches the app.
7. Migrated limits for storage driver interface params to static values,
since those defaults define the static contract between transport and
storage drivers.
8. Wrapped validation error messages in gettextutils._, and converted them
to use .format instead of %.
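Item 8 can be sketched as follows; the identity `_` below stands in for
gettextutils._ (not imported here), and the validator is a hypothetical
example, not the patch's actual code:

```python
# Sketch of item 8: validation messages wrapped in a translation
# function and formatted with .format instead of %.
def _(msg):
    # Stand-in for openstack.common.gettextutils._
    return msg


def validate_limit(limit, max_limit):
    if not 0 < limit <= max_limit:
        msg = _("Limit must be between 1 and {0}.")
        raise ValueError(msg.format(max_limit))
    return limit
```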
Change-Id: I1372e5002f030f5c8c47774ab00ca8ee7e12232d
Closes-Bug: #1270260
This patch removes some configuration files that were left over from
the days of the proxy. It also removes mention of the proxy from
common.transport.version.
Change-Id: I88f7d6490f5b0d0bdbdc827c69a72180ab6c3a12
Changes [proxy:config] and [queues:config] into
just [drivers], since the two projects are
separate, reducing repetition.
Change-Id: I982b5a08ed45426df17d9008854853c68c207608
Closes-Bug: #1231669
This change is made in preparation for the upcoming sharded storage
features. Shard registration is a feature that only operators should
be able to perform, and since sharding is done within the queues
application, it was necessary to break this out into a separate API.
This patch adds a new configuration variable: admin_mode. It is used
to multiplex which version of the API is loaded. Furthermore, the
admin API is an enhanced version of the public API in that it allows
every route that the public API does, in addition to admin-only
endpoints. This should ease unit testing in future patches.
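A toy model of the admin_mode multiplexing (route names and the
selector function are hypothetical, not the actual bootstrap code):

```python
# Hypothetical sketch: admin_mode selects the API variant to load.
# The admin variant is a superset of the public one, as described above.
def select_api(admin_mode):
    """Return which set of routes the loaded API exposes."""
    public_routes = ["queues", "messages", "claims"]
    admin_routes = public_routes + ["shards", "health"]
    return admin_routes if admin_mode else public_routes
```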
A few small refactorings were made, including:
- health resource moved to common transport location
- version module moved to common/transport
- pass config from bootstrap to transport driver
- pass cache in base transport driver
- convert base transport driver to use six.add_metaclass
- pass public bootstrap to bootstrap unit test
Change-Id: I0d6ff381afb25adb8a4b784a60b6d6eb71431245
Partially-implements: blueprint storage-sharding
This patch provides the plumbing for implementing storage
sharding across multiple backends. Sharding is agnostic to
storage driver type and transport type. The new feature is
optional, and disabled by default.
The design eschews placing any kind of sharding reverse proxy
in the network, allowing the storage drivers to continue
communicating directly with their respective backends.
Sharding can be enabled by setting the global "sharding"
option to True. Future patches will add a sharding section to
the config that can be used to tweak the way sharding works when
it is enabled.
Storage drivers are managed by a Catalog class. The Catalog is
responsible for registering and deregistering queues in the
catalog backend, and for looking up an appropriate driver,
according to which shard a particular queue has been assigned.
In the future, this design will make it straightforward to map
individual queues to different storage backends, according to user
preference.
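The Catalog's responsibilities can be sketched like this (a toy model;
names and structure are illustrative, not the real implementation):

```python
# Toy sketch of the Catalog described above: it registers and
# deregisters queues against shards, and looks up the storage driver
# for the shard a queue has been assigned to.
class Catalog(object):
    def __init__(self, shard_drivers):
        self._drivers = shard_drivers   # shard name -> storage driver
        self._registry = {}             # (project, queue) -> shard name

    def register(self, project, queue, shard):
        self._registry[(project, queue)] = shard

    def deregister(self, project, queue):
        self._registry.pop((project, queue), None)

    def lookup(self, project, queue):
        shard = self._registry.get((project, queue))
        return self._drivers.get(shard)
```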
FWIW, I considered enabling sharding by inserting the routing driver
as the last stage in the storage pipeline. However, it felt like
a hack for the following reasons:
* Doing so orphaned the regular, solitary driver that was
still always loaded at the end of the pipeline.
* Since the bootstrap was not aware of the sharding driver,
it could not be used to provide setup, so the catalog
object had to be turned into a singleton and options
had to always be loaded from the global config.
* The driver would have to be added to each controller
pipeline, and would have to always be the last stage in
the pipeline. Introducing a simple "sharded" boolean option
seemed to be a more straightforward, less error-prone way
for operators to enable sharding.
Partially-Implements: blueprint storage-sharding
Change-Id: I5190211e81fe4acd311b2cfdd0bae806cc3fec81
This patch moves pipeline setup into the bootstrap and out of
the storage driver base class, so that the base class can be
inherited by meta-drivers, such as the planned sharding manager,
without introducing a loop in the bootstrapping logic.
Now, a meta-driver is exposed to the transport object that
takes care of wiring up the pipeline for each resource
controller behind the scenes.
As part of this work, the pipeline config was modified to
support configuring different stages depending on the
resource. We create three instances of Pipeline anyway,
so it seemed to make sense to allow the operator to
configure the pipelines independently.
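The per-resource pipeline configuration might be wired up roughly like
this (illustrative only; the function and resource names are
assumptions, and the real meta-driver lives in the bootstrap):

```python
# Sketch: build one pipeline per resource controller, each with its
# own independently configured list of stages.
def build_pipelines(conf):
    """conf: dict mapping resource name -> list of stage callables."""
    resources = ("queue", "message", "claim")
    return {res: list(conf.get(res, [])) for res in resources}
```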
Partially-Implements: blueprint storage-pipeline
Change-Id: Ibdb7d0e9537b1eec38a13f4881df7462039bbf98
This patchset separates the configuration of the proxy from that of
the queues server. This was done in order to simplify the
configuration file for each, and because it is not expected that the
proxy and the queues servers would be launched on the same
host. Furthermore, many of the proxy options are not relevant to the
queues server.
To allow this, common.config had to be modified to take a
prog parameter. This enables saving multiple configuration
files to one directory. See below for details.
The new files are:
- etc/marconi-proxy.conf
- etc/marconi-queues.conf
They are expected to be saved to one of:
- ~/.marconi
- /etc/marconi
Regarding namespaces, queues-specific options are associated with the
'queues:*' group, and proxy-specific options are associated with the
'proxy:*' group.
The appropriate changes are also applied to the test suite and
helpers.
Change-Id: I7cf25e47ecff47934b50c21000b31308e1a4c8a9
Implements: blueprint placement-service
This patch adds smarter configuration to the proxy in two steps:
1. mirror the transport implementation used in marconi.queues in
marconi.proxy
2. add a bootstrap file to take care of start up
Rationale: make configuration work, make deploying easy, make
alternate transport implementations feasible.
Another change: the unit tests are fixed with a few changes:
1. add drop functionality to the proxy storage interface
2. use drop/flush in the test suite tearDown
3. remove tests.unit.test_config
4. delete queues at the end of the catalogue test (not yet robust)
The rationale for (3) was that test_config did not play nice with
other tests when they were registering their options, and failed as a
result. Furthermore, we should not need to test oslo.config.
Configuration changes: new fields in etc/marconi.conf
- drivers:proxy
- drivers:proxy:storage:{memory,mongodb}
- drivers:proxy:transport:wsgi
- oslo_cache
Also, fix: InternalServerError -> HTTPInternalServerError
Finally, redis was removed from requirements.txt.
Change-Id: If2365a1a738a3975fe6bde7bd07dfdee3460cecd
Implements: blueprint placement-service
This patch causes data to be partitioned across multiple databases in
order to reduce writer lock contention. The "queues" collection is
isolated in its own database, while the messages collection is partitioned
across several other databases. The number of partitions is configurable.
For example, if the number of partitions is set to 4, these databases
will be created in MongoDB:
marconi_queues
marconi_messages_p0
marconi_messages_p1
marconi_messages_p2
marconi_messages_p3
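The mapping from a queue to its messages database can be sketched as
follows (the hashing choice is illustrative, not Marconi's actual
partitioning code; only the database naming comes from this patch):

```python
# Sketch of the partitioning above: queues live in their own database,
# while a queue's messages land in one of N partition databases chosen
# by a stable hash of the queue name.
import zlib


def message_database(queue_name, num_partitions, prefix="marconi"):
    partition = zlib.crc32(queue_name.encode("utf-8")) % num_partitions
    return "{0}_messages_p{1}".format(prefix, partition)
```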
Implements: blueprint mongodb-multidb
Change-Id: I399f4a39e5377a381aef489b046bc14155ccb75b
This patch changes markers so that they are generated using a per-queue
side counter. A heuristic is used to mitigate a race condition. Due to the
new semantics, partial inserts are no longer possible due to collisions,
which ended up simplifying the retry logic for posting messages.
As a consequence of this patch, the last message posted no longer needs
to remain in the queue indefinitely, rendering marconi-gc unnecessary,
and so it has been removed.
Also, since the mongod GC worker runs once a minute, the queries no
longer filter out expired-but-not-yet-gc'd messages; on average, a
message may live more than 30 seconds past its expected lifetime, but I
do not think that this will harm or complicate any application built on
top of Marconi, practically speaking. That being said, it is worth
calling out in the documentation.
Closes-Bug: #1218602
Change-Id: I34e24e7dd7c4e017c84eb5929ce37ad4c9e5266a
This patch brings together oslo.cache, oslo.config, and stevedore to
provide pluggable, hierarchical catalogue caching for marconi proxy.
Here's the list of changes:
- add configuration/driver loading at the app level
- remove select from proxy storage driver - unnecessary intelligence
at storage layer
- node.weighted_select -> partition.weighted_select (clearer name)
- forwarding logic further refactored, placed in own module
- caching logic placed in lookup module
- selector passed down at app level to handle round-robin state
globally
* open to becoming configurable
- adds several TODOs for a better proxy
Change-Id: I3bc568315e685486d63cdce3ec278c89e3f2b2bc
Implements: blueprint placement-service
This change adds the following options to the config file:
[limits:storage]
default_queue_paging = 10
default_message_paging = 10
So that the default value of the "limit" URI param is now configurable.
This patch also removes the "actions" cruft.
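Applying the configurable default can be sketched like this (the helper
is hypothetical; only the option name and default value come from this
change):

```python
# Sketch: fall back to the configurable default_queue_paging value
# when the request carries no "limit" URI param.
def effective_limit(params, default_queue_paging=10):
    limit = params.get("limit")
    return int(limit) if limit is not None else default_queue_paging
```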
Implements: blueprint configurable-default-paging
Change-Id: Id38295f1e607226a4259be7744e6ce2d7b6de12e
Although "message_paging_uplimit" also limits the maximum number of
IDs that can be supplied in a URI (as in bulk deletion), which is not
quite a "page", we don't need the configuration to be that precise.
Change-Id: I0737146f1212c82db18de35e35206d3932a46628
This patch adds the configuration variables for transport
driver-specific limits and input validation to the sample config file,
so that users don't need to read the source code to figure
out how to change the limits :)
Change-Id: I811b7dc4ca44d25a3cdb5402e11d599aa532ab39
The change Ib601418a7dbcad84c79b640b93bc5798ec4a62c4 added support for
stevedore but it didn't update the config sample file under etc/
This patch updates that file.
Change-Id: Ie225c929a12059f077993db9ced3d49981897c8f
This patch uses cfg.CONF as the ConfigOpts instance in common/config
instead of creating a new one. This is needed since most of Oslo's
modules use the global CONF object to register their config options.
The patch also moves the log.setup call into Bootstrap and calls it
after the configs are loaded.
Fixes bug: #1190524
Change-Id: Ib601418a7dbcad84c79b640b93bc5798ec4a62c4
This patch modifies the way message markers are generated and
used, such that Marconi can guarantee FIFO for a single message
producer posting to a single queue (FIFO for multiple producers
is not guaranteed). At the same time, these changes guarantee
that observer clients will receive messages once and only once
by removing race conditions inherent in timestamp-based markers.
Along the way, some technical debt was also addressed, particularly
regarding style, along with some minor optimizations.
A new service, marconi-mongo-gc, was also added, and is required
when using the MongoDB storage driver for Marconi. This service
was necessary due to the special requirement that at least the
most recent message always remain in each queue, which is an
a-priori assumption upon which the FIFO/pagination algorithm
is based, and arises from the particular constraints imposed
by MongoDB's semantics.
Note: While implementing this blueprint, many calculated tradeoffs
were made in an attempt to balance performance, risk, readability,
and maintainability. The goal was to create a reasonable baseline
implementation that can be iterated upon pending comprehensive
system and performance testing. Due to the many subtleties
of solving the FIFO/once-and-only-once problem for the MongoDB
driver, future contributors should exercise extreme caution when
modifying the algorithm introduced in this patch.
Changes include:
* Align text in comments
* Add counter to queue, messages
* Markers are now converted over to using monotonic counter
* Handle DuplicateKeyError
* Return resources in body as a response to a message POST
* Added mongo driver claims tests
* Return 503 when no messages were enqueued due to marker conflict
* Added backoff sleep between retries
* Added marconi-mongo-gc service. This is a new required service
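The per-queue monotonic counter behind the new markers can be modeled
as follows (a toy sketch: a dict stands in for the MongoDB side
counter, and the increment mimics an atomic findAndModify; this is not
the driver's real code):

```python
# Toy model of the per-queue monotonic marker counter. In MongoDB the
# increment would be a single atomic operation, with retries on
# DuplicateKeyError handling the race mentioned above.
class MarkerCounter(object):
    def __init__(self):
        self._counters = {}

    def next_marker(self, queue_name):
        marker = self._counters.get(queue_name, 0) + 1
        self._counters[queue_name] = marker
        return marker
```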
Implements: blueprint message-pagination
Change-Id: Ifa0bb9e1bc393545adc4c804d14c6eb2df01848c
Instead of using paste and depending on the middleware chain, use
ceilometer's strategy and wrap the transport's app using the auth_token
middleware.
NOTE: Tests with successful auth are missing
Implements blueprint remove-paste
Change-Id: I61e7d1fae6b80114e22c0a43b4e391e2d5443123
The patch replaces gunicorn with wsgiref since it doesn't make sense
to have gunicorn as a dependency. Let's let deployers choose whatever
they prefer to use as a container.
The patch also removes lib/* since marconi_paste is not needed anymore;
the wsgi app can now be accessed through:
`marconi.transport.wsgi.app:app`
Backward-incompatible change: bind now refers to the host, and a new
config variable was introduced to specify the port it should bind to.
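Serving with wsgiref, as this patch enables, looks roughly like this
(the app below is a placeholder, not Marconi's actual one, which is
importable as marconi.transport.wsgi.app:app):

```python
# Hedged sketch: bind a wsgi app with stdlib wsgiref, keeping host and
# port as separate settings per the backward-incompatible change above.
from wsgiref.simple_server import make_server


def app(environ, start_response):
    # Placeholder app standing in for marconi.transport.wsgi.app:app.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]


def bind(host="127.0.0.1", port=0):
    # port=0 asks the OS for a free port; production configs would set
    # the new port option explicitly.
    return make_server(host, port, app)
```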
Fixes bug: #1187280
Implements blueprint: transport-wsgi
Change-Id: I9f7767ace5c6553e75e2f4587032d7d64b9537c4
This patch imports oslo's log module. I modified it since it depends on
oslo's notifier, which has many other dependencies Marconi doesn't
need.
There's sort of an issue with gunicorn: it seems that once
log.setup('marconi') is called, gunicorn starts logging each message
twice.
Change-Id: I85e8354b7af9b67ea1f87e54b3b73250d4a1115a
Instead of specifying a custom location for the Marconi configuration file
through Paste, just use the defaults as specified by our config module.
Change-Id: Ia01e262e298f953412a36c75614c7656c3105db1
Implements: blueprint config-module
This patch adds gunicorn as a dependency for starting and managing
server workers. Gunicorn allows users to choose whether to use workers
based on a sync process, eventlet, or gevent. It also manages multiple
workers, SSL, process uid, and process gid.
Implements blueprint transport-base
Change-Id: Ic328dbb937f8df69691911c6e6ff6b318bb1db34
This patch adds the first console_script for the marconi server.
To use it, it's enough to run setup.py with either develop or
install.
Side changes:
The config now uses sys.argv[1:] if no custom CLI args have been
set.
Implements blueprint transport-base
Change-Id: I15732129d66b32fca2b818e9105b17f541094983