Fix the typos in the ceilometer specs

Scan the ceilometer specs repository. Filter the result and fix the mistakes.

Change-Id: Idfbc41c3b681aa57cd5153dffc2dae600a58efb9

parent a4125ab4f3
commit 132f377e85
@@ -120,7 +120,7 @@ None, we will continue to store a new resource per sample
 Performance/Scalability Impacts
 -------------------------------
 
-The read performance should improve as we will not have have a giant Sample
+The read performance should improve as we will not have a giant Sample
 table anymore but smaller, tailored Resource, Meter, and Sample tables.
 The write performance is not expected to degrade noticeably.
 
@@ -45,7 +45,7 @@ types of changes (detailed below): Adding support for Ceilometer queries in
 the `javelin code`_ and adding Ceilometer specific entries to the resource
 definitions.
 
-The main check that will be facillitated by javelin2 is ensuring the sanity of
+The main check that will be facilitated by javelin2 is ensuring the sanity of
 api queries with a time range that spans the entire window of time within which
 the Grenade test runs (e.g. -+12 hours from now).
 
@@ -12,7 +12,7 @@ https://blueprints.launchpad.net/ceilometer/+spec/grenade-upgrade-testing
 
 Integrated projects are required to participate in the `grenade`_ upgrade
 testing harness. Ceilometer was integrated before these requirements were
-added but the requirements apply retroactively. Therefore ceilometer must be
+added but the requirements apply retroactively. Therefore, ceilometer must be
 added to the harness.
 
 .. _grenade: https://github.com/openstack-dev/grenade
@@ -164,7 +164,7 @@ References
 ==========
 
 * `Ironic spec`_
-* `Review in progress`_ for sending notifcation from Ironic.
+* `Review in progress`_ for sending notification from Ironic.
 * `Sample data`_
 
 .. _Ironic spec: https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer
@@ -82,7 +82,7 @@ REST API impact
 
 None
 
-No API method is either added or changed. Nevertheless the new error http
+No API method is either added or changed. Nevertheless, the new error http
 response code (HTTP 403) will be returned when the alarm quotas are exceeded.
 
 Security impact
@@ -70,7 +70,7 @@ Proposed change
 We propose to solve the problem by moving access control from calls to the ACL
 to applying a decorator to the API methods. Each publicly accessible API
 method would have a decorator pointing to a new RBAC module. The RBAC module
-decorator would use rules defined in policy.json to determine accesibility of
+decorator would use rules defined in policy.json to determine accessibility of
 methods by a caller.
 
 This would allow fine-grained, role-dependent, method-specific access control.
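For context on the decorator approach this hunk touches, a minimal sketch follows. The rule names, the `POLICY_RULES` dict, and the `_enforce` helper are all illustrative stand-ins, not the real ceilometer or oslo.policy API.

```python
# Hypothetical sketch of decorating API methods with a policy rule.
# POLICY_RULES stands in for a parsed policy.json; _enforce is a toy
# policy engine, not the actual oslo.policy Enforcer.
import functools

POLICY_RULES = {
    "telemetry:get_samples": "role:admin or role:reader",
    "context_is_admin": "role:admin",
}


def _enforce(rule, roles):
    """Grant access when any of the caller's roles appears in the rule."""
    expr = POLICY_RULES.get(rule, "")
    return any("role:%s" % r in expr for r in roles)


def protected(rule):
    """Decorator pointing an API method at a named policy rule."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(caller_roles, *args, **kwargs):
            if not _enforce(rule, caller_roles):
                raise PermissionError("403: rule %s denied" % rule)
            return func(caller_roles, *args, **kwargs)
        return wrapper
    return decorator


@protected("telemetry:get_samples")
def get_samples(caller_roles):
    # Placeholder API method body.
    return ["sample-1", "sample-2"]
```

With rules shipped containing only `context_is_admin`, as the spec suggests, no extra Keystone role would grant new access until an operator adds rules.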
@@ -135,7 +135,7 @@ manage.
 
 We believe this risk is mitigated by the ability to ship the basic code with
 only the current context_is_admin rule enabled. Such configuration would not
-allow additional Keystone roles to grant new priviliges unless the system
+allow additional Keystone roles to grant new privileges unless the system
 operators explicitly added new rules to the policy file.
 
 Pipeline impact
@@ -148,7 +148,7 @@ Other end user impact
 
 This will have no direct impact on on python-ceilometerclient as roles and
 their associated rules would be established in keystone and interpreted by
-Ceilometer API. Nevertheless the python-ceilometerclient will benefit from the
+Ceilometer API. Nevertheless, the python-ceilometerclient will benefit from the
 increase security provided by the new policy support. For instance, collector
 agent (or any other ceilometer service) can have a special role associated
 with it disallowing other services (with admin status) to post data in the
@@ -20,7 +20,7 @@ Problem description
 
 
 Currently, ceilometer doesn't has the capability to get the meters from ceph
-object storage, when the ceph is used as a object storage, instead of swift
+object storage, when the ceph is used as an object storage, instead of swift
 object storage).
 
 
@@ -43,7 +43,7 @@ Alternatives
 
 - We can continue to publish via the pipeline but it is far too verbose and
 will not solve all dependency issues.
-- Have swift own it's own metrics and have it exist swift package. This is
+- Have swift own its own metrics and have it exist swift package. This is
 dependent on swift accepting something not completely scoped to swift
 internal functionality.
 - Drop support of swift middleware meters (ie. we won't test it but it'll just
@@ -260,7 +260,7 @@ to be handled.
 Dependencies
 ============
 
-This work is self contained but may add to the libraries required
+This work is self-contained but may add to the libraries required
 for testing (e.g. wsgi-intercept).
 
 Testing
@@ -18,7 +18,7 @@ Problem description
 
 Currently, metering and event data coexists on the same database. While
 related, there's a logical separation in the metering and event models
-where the data in each model has it's own unique data and structure; the
+where the data in each model has its own unique data and structure; the
 metering model can be best described as a time series while the event model is
 closer to an entity attribute model. As the models are different in what they
 capture, it makes sense that deployers may choose to use different storage
@@ -27,8 +27,8 @@ drivers to store each data set.
 Proposed change
 ===============
 
-Similar to the work done to split alarming into it's own database, this
-blueprint is to allow for event related data to be stored in it's own database.
+Similar to the work done to split alarming into its own database, this
+blueprint is to allow for event related data to be stored in its own database.
 
 Alternatives
 ------------
@@ -109,7 +109,7 @@ Other deployer impact
 ---------------------
 
 The ElasticSearch storage driver is to have feature parity with the rest of
-the currently available event driver backends. It will add a elasticsearch-py
+the currently available event driver backends. It will add an elasticsearch-py
 client dependency should the driver be selected.
 
 Developer impact
@@ -38,7 +38,7 @@ Add the method 'inspect_disk_iops' in the Inspector and implement it in the
 HyperVInspector. The method will return a DiskIOPSStats object.
 
 The metric value will be fetched from Hyper-V VMs, located in the
-Msvm_AggregationMetricValue object (further refered to as metric object)
+Msvm_AggregationMetricValue object (further referred to as metric object)
 associated with the VMs. The metric object's 'MetricDefinitionId' must be
 equal to the 'Id' of Msvm_AggregationMetricDefinition object having the
 Caption 'Average Normalized Disk Throughput'.
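The lookup this hunk describes (match metric values to the definition whose Caption is 'Average Normalized Disk Throughput') can be sketched as below. The WMI layer is faked with plain dicts, and `DiskIOPSStats` here is an assumed container, not the exact ceilometer class.

```python
# Illustrative sketch of the inspect_disk_iops flow; METRIC_DEFS and
# METRIC_VALUES stand in for Msvm_AggregationMetricDefinition and
# Msvm_AggregationMetricValue WMI query results.
import collections

DiskIOPSStats = collections.namedtuple('DiskIOPSStats', ['device', 'iops_count'])

METRIC_DEFS = [{'Id': 'def-1', 'Caption': 'Average Normalized Disk Throughput'}]
METRIC_VALUES = [
    {'MetricDefinitionId': 'def-1', 'Device': 'disk-0', 'MetricValue': '150'},
    {'MetricDefinitionId': 'other', 'Device': 'disk-0', 'MetricValue': '999'},
]


def inspect_disk_iops():
    """Yield DiskIOPSStats for values tied to the IOPS metric definition."""
    # MetricDefinitionId must equal the 'Id' of the matching definition.
    iops_def_ids = {d['Id'] for d in METRIC_DEFS
                    if d['Caption'] == 'Average Normalized Disk Throughput'}
    for value in METRIC_VALUES:
        if value['MetricDefinitionId'] in iops_def_ids:
            yield DiskIOPSStats(device=value['Device'],
                                iops_count=int(value['MetricValue']))
```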
@@ -40,7 +40,7 @@ configure the kafka publisher plugin in pipeline.yaml file like::
 - kafka://<broker_ip>?topic=<topic_name>
 
 This way any application that is trying to consume streaming ceilometer
-metrics via Kafka, can directly consume the ceilometer samples. For example
+metrics via Kafka, can directly consume the ceilometer samples. For example,
 projects like monasca - https://github.com/stackforge/monasca-thresh can consume
 the ceilometer metrics that are published by the Ceilometer Kafka publisher.
 
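The `kafka://<broker_ip>?topic=<topic_name>` URL from this hunk could be handled roughly as follows. The parsing and the `send` callable are assumptions for illustration; the real publisher would hand messages to an actual Kafka client.

```python
# Rough sketch of a kafka:// publisher: parse the pipeline sink URL,
# serialize samples to JSON, and pass them to a producer callable.
import json
import urllib.parse


def parse_kafka_url(url):
    """Split kafka://<broker_ip>?topic=<topic_name> into broker and topic."""
    parsed = urllib.parse.urlparse(url)
    # Default topic name is an assumption, not from the spec.
    topic = urllib.parse.parse_qs(parsed.query).get('topic', ['ceilometer'])[0]
    return parsed.netloc, topic


def publish_samples(url, samples, send):
    """Serialize each sample to JSON and hand it to the send callable."""
    broker, topic = parse_kafka_url(url)
    for sample in samples:
        send(broker, topic, json.dumps(sample))
```

A consumer such as monasca-thresh would then read the same JSON payloads straight off the topic.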
@@ -70,7 +70,7 @@ connections are under implementation. More information can be found in the refer
 Pipeline impact
 ---------------
 
-Provide new options to specify kafka publiser as a ceilometer publisher
+Provide new options to specify kafka publisher as a ceilometer publisher
 
 Other end user impact
 ---------------------
@@ -44,7 +44,7 @@ The Neutron event topic names have the following pattern,
 * <resource>.delete.end
 
 For this implementation we will track the end events as those give us the most information
-with respect to event payload. Also end events are more informative to track the existance
+with respect to event payload. Also end events are more informative to track the existence
 of a resource and its usage over time.
 
 Alternatives
@@ -89,7 +89,7 @@ Alternatives include:
 to implement what are called "SLA checks" via Rally. These allow a
 check job which can fail based on a severe degradation in
 performance. These are not being done now to first gain experience
-with Rally as well as to comparmentalize work in actionable
+with Rally as well as to compartmentalize work in actionable
 chunks.
 
 Data model impact
@@ -128,7 +128,7 @@ Developer impact
 ----------------
 
 This change may increase the amount of asynchrony in some tests.
-For example the time between posting a sample and being able to
+For example, the time between posting a sample and being able to
 retrieve that sample may become more unpredictable. As it is already
 unpredictable any tests which rely on immediate retrieval are bad
 anyway, so we should fix that.
@@ -47,7 +47,7 @@ alarm history.
 However their scale might be completely different, so the expiration
 frequency shouldn't be the same.
 
-Therefore we will have separate TTL.
+Therefore, we will have separate TTL.
 
 
 Data model impact
@@ -113,7 +113,7 @@ Performance/Scalability Impacts
 -------------------------------
 
 Less sample data. Less equivalent data in Events (because of trait filtering).
-Events are a bit more scalable in it's design (maybe not the SQL backend).
+Events are a bit more scalable in its design (maybe not the SQL backend).
 
 Other deployer impact
 ---------------------
@@ -29,7 +29,7 @@ work and making user reports.
 Proposed change
 ===============
 
-Add a implementation of method get_meter_statistics via MongoDB
+Add an implementation of method get_meter_statistics via MongoDB
 aggregation pipeline framework.
 
 From MongoDB docs:
@@ -66,7 +66,7 @@ This option allows to write intermediate staging data to temporary files.
 So, primary risks of this approach are a necessity of free space
 on disk and a slow performance of disk writing and reading.
 
-Accordingly researches and MongoDB docs, the "$sort" command creates
+Accordingly, researches and MongoDB docs, the "$sort" command creates
 the most amount of intermediate data for follow stages. So, in practice
 this stage prepares data whose size is close to new index size.
 In same time, the indexed fields sorting (like timestamp
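A pipeline of the shape this spec discusses, with the `$sort` stage that generates the bulk of the intermediate data, might look roughly like the sketch below. The stage contents and field names (`counter_name`, `counter_volume`, `timestamp`) are assumptions modelled on ceilometer's sample schema, not the exact pipeline.

```python
# Hypothetical shape of the get_meter_statistics aggregation pipeline.
def build_statistics_pipeline(meter_name):
    """Build a $match/$sort/$group pipeline for one meter's samples."""
    return [
        {'$match': {'counter_name': meter_name}},
        # The $sort stage is the one the text above flags as producing
        # the most intermediate data (hence the disk-spill concern).
        {'$sort': {'timestamp': 1}},
        {'$group': {
            '_id': '$counter_name',
            'min': {'$min': '$counter_volume'},
            'max': {'$max': '$counter_volume'},
            'avg': {'$avg': '$counter_volume'},
            'count': {'$sum': 1},
        }},
    ]
```

Such a pipeline would typically be run with `collection.aggregate(pipeline, allowDiskUse=True)`, which is what makes the temporary on-disk staging files possible.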
@@ -4,9 +4,9 @@
 
 http://creativecommons.org/licenses/by/3.0/legalcode
 
-=====================
-Pollsters No Tranform
-=====================
+=======================
+Pollsters No Transform
+=======================
 
 https://blueprints.launchpad.net/ceilometer/+spec/pollsters-no-transform
 
@@ -30,7 +30,7 @@ Proposed change
 
 One way to resolve this complexity is to not do transformations in the
 pollsters. Instead when new samples are polled, format them as notifications
-and push them onto the notification bus to be retreived by the notification
+and push them onto the notification bus to be retrieved by the notification
 agent. The pipeline within that agent will then do any required transformations.
 
 Besides clarifying the focus of the polling agents it is likely this change
@@ -93,7 +93,7 @@ Next steps for HA notification agent:
 Pros:
 Preferred and Easiest approach
 Cons:
-It doesnt centralize the pipeline definition and runs the risk of agents
+It doesn't centralize the pipeline definition and runs the risk of agents
 diverging on their pipeline definitions
 
 This means we're allowing any kind of error levels due to the fact file
@@ -134,7 +134,7 @@ Alternatives
 
 3. Use automated deployment tools - Puppet, Chef, Ansible to change pipeline
 definitions. While this automates changing pipeline definitions across
-multiple agents, it doesnt bring the value-add of on-the-fly updates to the
+multiple agents, it doesn't bring the value-add of on-the-fly updates to the
 agent, without incurring a restart of the daemons.
 
 Pros:
@@ -118,7 +118,7 @@ Ongoing maintainer:
 Work Items
 ----------
 
-* Seperate eventlet and non-eventlet commands into two different module
+* Separate eventlet and non-eventlet commands into two different module
 directories, starting with the api module.
 * Compare and contrast performance of the API server with and without eventlet
 paying specific attention to the impact on accessing the storage layer. Mike
@@ -10,7 +10,7 @@ Spliting Ceilometer alarming
 
 https://blueprints.launchpad.net/ceilometer/+spec/split-ceilometer-alarming
 
-Ceilometer evolved from a a simple meter gathering component to a lot of
+Ceilometer evolved from a simple meter gathering component to a lot of
 different component doing different things. The storage layer has been
 abstracted during Juno and Kilo and is going to be handled by Gnocchi. This
 spec proposes that the work continues so that the alarming subsystem of
@@ -143,7 +143,7 @@ Data model impact
 -----------------
 
 * The "threshold_rules" will be a dict that can be include multiple threshold
-rules and the the key of the dict will be "and" or "or". But this has no
+rules and the key of the dict will be "and" or "or". But this has no
 effect on data model, because the rules is stored as json in db.
 
 REST API impact
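The composite "threshold_rules" dict this hunk describes, keyed by "and" or "or", could look roughly like the sketch below. The sub-rule field names are assumptions modelled on existing alarm threshold rules, and the evaluator is a toy illustration (only the gt comparison is handled).

```python
# Illustrative shape of a composite threshold_rules dict: the top-level
# key is "and" or "or", the value a list of per-meter threshold rules.
composite_rule = {
    'or': [
        {'meter_name': 'cpu_util', 'comparison_operator': 'gt',
         'threshold': 80.0},
        {'meter_name': 'memory.usage', 'comparison_operator': 'gt',
         'threshold': 900.0},
    ],
}


def evaluate(rule, readings):
    """Evaluate a composite rule against {meter_name: value} readings."""
    (op, subrules), = rule.items()  # single "and"/"or" key, per the spec
    results = [readings.get(r['meter_name'], 0) > r['threshold']
               for r in subrules]
    return all(results) if op == 'and' else any(results)
```

Since the rules are stored as JSON in the database, this nesting needs no schema change, matching the spec's "no effect on data model" claim.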
@@ -24,7 +24,7 @@ Proposed change
 ===============
 Add a new notifier publisher and use this publisher in event transformer,
 when there is a new transformed event from events pipeline
-transformers, convert it to notification, send it back to ceilometer notificaton listener,
+transformers, convert it to notification, send it back to ceilometer notification listener,
 the notification payload is generated from event traits,
 then the sample pipeline will convert and publish it into sample.
 
@@ -114,7 +114,7 @@ Work Items
 
 In current cycle:
 
-* send a email to do a user survey to get feedbacks about this change
+* send an email to do a user survey to get feedbacks about this change
 * deprecate mongodb and Hbase storage support and log warning messages if
 config mongodb or Hbase as storage driver
 * add alarm data migration tool for migrating data from mongodb/Hbase to sql
@@ -65,7 +65,7 @@ There are two upgrade paths to handle -- both require no code change:
 against same pipeline sets.
 4. The new polling agent can be started alongside the old agent only if
 no new pollsters were added. If not, new polling agents must start only
-in it's own partitioning group and poll only the new pollsters. After
+in its own partitioning group and poll only the new pollsters. After
 all old agents are upgraded, the polling agents can be changed to poll
 both new pollsters AND the old ones.
 5. API service management is handled by WSGI so there is only ever one