diff --git a/specs/juno/big-data-sql-v2.rst b/specs/juno/big-data-sql-v2.rst
index 784312c..2575bfb 100644
--- a/specs/juno/big-data-sql-v2.rst
+++ b/specs/juno/big-data-sql-v2.rst
@@ -120,7 +120,7 @@ None, we will continue to store a new resource per sample
 Performance/Scalability Impacts
 -------------------------------
 
-The read performance should improve as we will not have have a giant Sample
+The read performance should improve as we will not have a giant Sample
 table anymore but smaller, tailored Resource, Meter, and Sample tables.
 The write performance is not expected to degrade noticeably.
 
diff --git a/specs/juno/grenade-resource-survivability.rst b/specs/juno/grenade-resource-survivability.rst
index 99ccbd4..3ef95a7 100644
--- a/specs/juno/grenade-resource-survivability.rst
+++ b/specs/juno/grenade-resource-survivability.rst
@@ -45,7 +45,7 @@ types of changes (detailed below):
 Adding support for Ceilometer queries in the `javelin code`_ and adding
 Ceilometer specific entries to the resource definitions.
 
-The main check that will be facillitated by javelin2 is ensuring the sanity of
+The main check that will be facilitated by javelin2 is ensuring the sanity of
 api queries with a time range that spans the entire window of time within
 which the Grenade test runs (e.g. -+12 hours from now).
 
diff --git a/specs/juno/grenade-upgrade-testing.rst b/specs/juno/grenade-upgrade-testing.rst
index 4123aeb..d637516 100644
--- a/specs/juno/grenade-upgrade-testing.rst
+++ b/specs/juno/grenade-upgrade-testing.rst
@@ -12,7 +12,7 @@ https://blueprints.launchpad.net/ceilometer/+spec/grenade-upgrade-testing
 
 Integrated projects are required to participate in the `grenade`_ upgrade
 testing harness. Ceilometer was integrated before these requirements were
-added but the requirements apply retroactively. Therefore ceilometer must be
+added but the requirements apply retroactively. Therefore, ceilometer must be
 added to the harness.
 
 .. _grenade: https://github.com/openstack-dev/grenade
diff --git a/specs/juno/ironic-notifications.rst b/specs/juno/ironic-notifications.rst
index 0fb5cd0..3548f80 100644
--- a/specs/juno/ironic-notifications.rst
+++ b/specs/juno/ironic-notifications.rst
@@ -164,7 +164,7 @@ References
 ==========
 
 * `Ironic spec`_
-* `Review in progress`_ for sending notifcation from Ironic.
+* `Review in progress`_ for sending notification from Ironic.
 * `Sample data`_
 
 .. _Ironic spec: https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer
diff --git a/specs/juno/quotas-on-alarms.rst b/specs/juno/quotas-on-alarms.rst
index fc79e6d..adb61e7 100644
--- a/specs/juno/quotas-on-alarms.rst
+++ b/specs/juno/quotas-on-alarms.rst
@@ -82,7 +82,7 @@ REST API impact
 
 None
 
-No API method is either added or changed. Nevertheless the new error http
+No API method is either added or changed. Nevertheless, the new error http
 response code (HTTP 403) will be returned when the alarm quotas are exceeded.
 
 Security impact
diff --git a/specs/kilo/ceilometer-rbac.rst b/specs/kilo/ceilometer-rbac.rst
index a6851f4..c2945ce 100644
--- a/specs/kilo/ceilometer-rbac.rst
+++ b/specs/kilo/ceilometer-rbac.rst
@@ -70,7 +70,7 @@ Proposed change
 We propose to solve the problem by moving access control from calls to the
 ACL to applying a decorator to the API methods. Each publicly accessible API
 method would have a decorator pointing to a new RBAC module. The RBAC module
-decorator would use rules defined in policy.json to determine accesibility of
+decorator would use rules defined in policy.json to determine accessibility of
 methods by a caller. This would allow fine-grained, role-dependent,
 method-specific access control.
 
@@ -135,7 +135,7 @@ manage.
 We believe this risk is mitigated by the ability to ship the basic code
 with only the current context_is_admin rule enabled. Such configuration would not
-allow additional Keystone roles to grant new priviliges unless the system
+allow additional Keystone roles to grant new privileges unless the system
 operators explicitly added new rules to the policy file.
 
 Pipeline impact
@@ -148,7 +148,7 @@ Other end user impact
 
 This will have no direct impact on on python-ceilometerclient as roles and
 their associated rules would be established in keystone and interpreted by
-Ceilometer API. Nevertheless the python-ceilometerclient will benefit from the
+Ceilometer API. Nevertheless, the python-ceilometerclient will benefit from the
 increase security provided by the new policy support. For instance, collector
 agent (or any other ceilometer service) can have a special role associated
 with it disallowing other services (with admin status) to post data in the
diff --git a/specs/kilo/ceilometer_ceph_integration.rst b/specs/kilo/ceilometer_ceph_integration.rst
index aa1732d..c930759 100644
--- a/specs/kilo/ceilometer_ceph_integration.rst
+++ b/specs/kilo/ceilometer_ceph_integration.rst
@@ -20,7 +20,7 @@ Problem description
 
 
 Currently, ceilometer doesn't has the capability to get the meters from ceph
-object storage, when the ceph is used as a object storage, instead of swift
+object storage, when the ceph is used as an object storage, instead of swift
 object storage).
 
 
diff --git a/specs/kilo/ceilometermiddleware.rst b/specs/kilo/ceilometermiddleware.rst
index 813addf..7645fb1 100644
--- a/specs/kilo/ceilometermiddleware.rst
+++ b/specs/kilo/ceilometermiddleware.rst
@@ -43,7 +43,7 @@ Alternatives
 
 - We can continue to publish via the pipeline but it is far too verbose and
   will not solve all dependency issues.
-- Have swift own it's own metrics and have it exist swift package. This is
+- Have swift own its own metrics and have it exist swift package. This is
   dependent on swift accepting something not completely scoped to swift
   internal functionality.
 - Drop support of swift middleware meters (ie. we won't test it but it'll just
diff --git a/specs/kilo/declarative-http-tests.rst b/specs/kilo/declarative-http-tests.rst
index 29e42c3..14458f5 100644
--- a/specs/kilo/declarative-http-tests.rst
+++ b/specs/kilo/declarative-http-tests.rst
@@ -260,7 +260,7 @@ to be handled.
 Dependencies
 ============
 
-This work is self contained but may add to the libraries required
+This work is self-contained but may add to the libraries required
 for testing (e.g. wsgi-intercept).
 
 Testing
diff --git a/specs/kilo/dedicated-event-db.rst b/specs/kilo/dedicated-event-db.rst
index 16d0692..13fa67d 100644
--- a/specs/kilo/dedicated-event-db.rst
+++ b/specs/kilo/dedicated-event-db.rst
@@ -18,7 +18,7 @@ Problem description
 
 Currently, metering and event data coexists on the same database. While
 related, there's a logical separation in the metering and event models
-where the data in each model has it's own unique data and structure; the
+where the data in each model has its own unique data and structure; the
 metering model can be best described as a time series while the event model
 is closer to an entity attribute model. As the models are different in what
 they capture, it makes sense that deployers may choose to use different storage
@@ -27,8 +27,8 @@ drivers to store each data set.
 Proposed change
 ===============
 
-Similar to the work done to split alarming into it's own database, this
-blueprint is to allow for event related data to be stored in it's own database.
+Similar to the work done to split alarming into its own database, this
+blueprint is to allow for event related data to be stored in its own database.
 
 Alternatives
 ------------
diff --git a/specs/kilo/elasticsearch-event-db.rst b/specs/kilo/elasticsearch-event-db.rst
index 9fe4232..78b4efd 100644
--- a/specs/kilo/elasticsearch-event-db.rst
+++ b/specs/kilo/elasticsearch-event-db.rst
@@ -109,7 +109,7 @@ Other deployer impact
 ---------------------
 
 The ElasticSearch storage driver is to have feature parity with the rest of
-the currently available event driver backends. It will add a elasticsearch-py
+the currently available event driver backends. It will add an elasticsearch-py
 client dependency should the driver be selected.
 
 Developer impact
diff --git a/specs/kilo/hyper-v-disk-iops-metrics.rst b/specs/kilo/hyper-v-disk-iops-metrics.rst
index 111504f..ed3d91f 100644
--- a/specs/kilo/hyper-v-disk-iops-metrics.rst
+++ b/specs/kilo/hyper-v-disk-iops-metrics.rst
@@ -38,7 +38,7 @@ Add the method 'inspect_disk_iops' in the Inspector and implement it in the
 HyperVInspector. The method will return a DiskIOPSStats object.
 
 The metric value will be fetched from Hyper-V VMs, located in the
-Msvm_AggregationMetricValue object (further refered to as metric object)
+Msvm_AggregationMetricValue object (further referred to as metric object)
 associated with the VMs. The metric object's 'MetricDefinitionId' must be
 equal to the 'Id' of Msvm_AggregationMetricDefinition object having the
 Caption 'Average Normalized Disk Throughput'.
diff --git a/specs/kilo/kafka-publisher.rst b/specs/kilo/kafka-publisher.rst
index ce6922a..9d86796 100644
--- a/specs/kilo/kafka-publisher.rst
+++ b/specs/kilo/kafka-publisher.rst
@@ -40,7 +40,7 @@ configure the kafka publisher plugin in pipeline.yaml file like::
 
     - kafka://?topic=
 
 This way any application that is trying to consume streaming ceilometer
-metrics via Kafka, can directly consume the ceilometer samples. For example
+metrics via Kafka, can directly consume the ceilometer samples. For example,
 projects like monasca - https://github.com/stackforge/monasca-thresh can
 consume the ceilometer metrics that are published by the Ceilometer Kafka
 publisher.
@@ -70,7 +70,7 @@ connections are under implementation. More information can be found in the refer
 Pipeline impact
 ---------------
 
-Provide new options to specify kafka publiser as a ceilometer publisher
+Provide new options to specify kafka publisher as a ceilometer publisher
 
 Other end user impact
 ---------------------
diff --git a/specs/kilo/network-services-notifications.rst b/specs/kilo/network-services-notifications.rst
index e7dfb51..15f5bee 100644
--- a/specs/kilo/network-services-notifications.rst
+++ b/specs/kilo/network-services-notifications.rst
@@ -44,7 +44,7 @@ The Neutron event topic names have the following pattern,
 * .delete.end
 
 For this implementation we will track the end events as those give us the most information
-with respect to event payload. Also end events are more informative to track the existance
+with respect to event payload. Also end events are more informative to track the existence
 of a resource and its usage over time.
 
 Alternatives
diff --git a/specs/kilo/rally-check-gate.rst b/specs/kilo/rally-check-gate.rst
index 8647376..d6cb2f5 100644
--- a/specs/kilo/rally-check-gate.rst
+++ b/specs/kilo/rally-check-gate.rst
@@ -89,7 +89,7 @@ Alternatives include:
   to implement what are called "SLA checks" via Rally. These
   allow a check job which can fail based on a severe degradation
   in performance. These are not being done now to first gain experience
-  with Rally as well as to comparmentalize work in actionable
+  with Rally as well as to compartmentalize work in actionable
  chunks.
 
 Data model impact
diff --git a/specs/liberty/api-no-pipeline.rst b/specs/liberty/api-no-pipeline.rst
index 05a1bc2..91a8bc3 100644
--- a/specs/liberty/api-no-pipeline.rst
+++ b/specs/liberty/api-no-pipeline.rst
@@ -128,7 +128,7 @@ Developer impact
 ----------------
 
 This change may increase the amount of asynchrony in some tests.
-For example the time between posting a sample and being able to
+For example, the time between posting a sample and being able to
 retrieve that sample may become more unpredictable. As it is
 already unpredictable any tests which rely on immediate retrieval
 are bad anyway, so we should fix that.
diff --git a/specs/liberty/delete-alarmhistory.rst b/specs/liberty/delete-alarmhistory.rst
index 232d68e..3c1af11 100644
--- a/specs/liberty/delete-alarmhistory.rst
+++ b/specs/liberty/delete-alarmhistory.rst
@@ -47,7 +47,7 @@ alarm history.
 However their scale might be completely different, so the expiration
 frequency shouldn't be the same.
 
-Therefore we will have separate TTL.
+Therefore, we will have separate TTL.
 
 
 Data model impact
diff --git a/specs/liberty/deprecate-existence-meters.rst b/specs/liberty/deprecate-existence-meters.rst
index 3521a79..bbb46b5 100644
--- a/specs/liberty/deprecate-existence-meters.rst
+++ b/specs/liberty/deprecate-existence-meters.rst
@@ -113,7 +113,7 @@ Performance/Scalability Impacts
 -------------------------------
 
 Less sample data. Less equivalent data in Events (because of trait filtering).
-Events are a bit more scalable in it's design (maybe not the SQL backend).
+Events are a bit more scalable in its design (maybe not the SQL backend).
 
 Other deployer impact
 ---------------------
diff --git a/specs/liberty/mongodb-aggregation-pipeline.rst b/specs/liberty/mongodb-aggregation-pipeline.rst
index 4171ec3..2195bb5 100644
--- a/specs/liberty/mongodb-aggregation-pipeline.rst
+++ b/specs/liberty/mongodb-aggregation-pipeline.rst
@@ -29,7 +29,7 @@ work and making user reports.
 Proposed change
 ===============
 
-Add a implementation of method get_meter_statistics via MongoDB
+Add an implementation of method get_meter_statistics via MongoDB
 aggregation pipeline framework.
 
 From MongoDB docs:
@@ -66,7 +66,7 @@ This option allows to write intermediate staging data to temporary files.
 So, primary risks of this approach are a necessity of free space
 on disk and a slow performance of disk writing and reading.
 
-Accordingly researches and MongoDB docs, the "$sort" command creates
+Accordingly, researches and MongoDB docs, the "$sort" command creates
 the most amount of intermediate data for follow stages.
 So, in practice this stage prepares data whose size is close
 to new index size. In same time, the indexed fields sorting (like timestamp
diff --git a/specs/liberty/pollsters-no-transform.rst b/specs/liberty/pollsters-no-transform.rst
index 17e4ea5..277b8f7 100644
--- a/specs/liberty/pollsters-no-transform.rst
+++ b/specs/liberty/pollsters-no-transform.rst
@@ -4,9 +4,9 @@
 
 http://creativecommons.org/licenses/by/3.0/legalcode
 
-=====================
-Pollsters No Tranform
-=====================
+=======================
+Pollsters No Transform
+=======================
 
 https://blueprints.launchpad.net/ceilometer/+spec/pollsters-no-transform
 
@@ -30,7 +30,7 @@ Proposed change
 One way to resolve this complexity is to not do transformations in the
 pollsters. Instead when new samples are polled, format them as notifications
-and push them onto the notification bus to be retreived by the notification
+and push them onto the notification bus to be retrieved by the notification
 agent. The pipeline within that agent will then do any required
 transformations.
 
 Besides clarifying the focus of the polling agents it is likely this change
diff --git a/specs/liberty/reload-file-based-pipeline-configuration.rst b/specs/liberty/reload-file-based-pipeline-configuration.rst
index 53f3577..41482d5 100644
--- a/specs/liberty/reload-file-based-pipeline-configuration.rst
+++ b/specs/liberty/reload-file-based-pipeline-configuration.rst
@@ -93,7 +93,7 @@ Next steps for HA notification agent:
 
 Pros: Preferred and Easiest approach
 Cons:
-  It doesnt centralize the pipeline definition and runs the risk of agents
+  It doesn't centralize the pipeline definition and runs the risk of agents
   diverging on their pipeline definitions
 
 This means we're allowing any kind of error levels due to the fact file
@@ -134,7 +134,7 @@ Alternatives
 
 3. Use automated deployment tools - Puppet, Chef, Ansible to change pipeline
    definitions. While this automates changing pipeline definitions across
-   multiple agents, it doesnt bring the value-add of on-the-fly updates to the
+   multiple agents, it doesn't bring the value-add of on-the-fly updates to the
    agent, without incurring a restart of the daemons.
 
 Pros:
diff --git a/specs/liberty/remove-web-eventlet.rst b/specs/liberty/remove-web-eventlet.rst
index eab866e..ff44291 100644
--- a/specs/liberty/remove-web-eventlet.rst
+++ b/specs/liberty/remove-web-eventlet.rst
@@ -118,7 +118,7 @@ Ongoing maintainer:
 Work Items
 ----------
 
-* Seperate eventlet and non-eventlet commands into two different module
+* Separate eventlet and non-eventlet commands into two different module
   directories, starting with the api module.
 * Compare and contrast performance of the API server with and without eventlet
   paying specific attention to the impact on accessing the storage layer.
diff --git a/specs/liberty/split-ceilometer-alarming.rst b/specs/liberty/split-ceilometer-alarming.rst
index 340535c..445329f 100644
--- a/specs/liberty/split-ceilometer-alarming.rst
+++ b/specs/liberty/split-ceilometer-alarming.rst
@@ -10,7 +10,7 @@ Spliting Ceilometer alarming
 
 https://blueprints.launchpad.net/ceilometer/+spec/split-ceilometer-alarming
 
-Ceilometer evolved from a a simple meter gathering component to a lot of
+Ceilometer evolved from a simple meter gathering component to a lot of
 different component doing different things. The storage layer has been
 abstracted during Juno and Kilo and is going to be handled by Gnocchi. This
 spec proposes that the work continues so that the alarming subsystem of
diff --git a/specs/mitaka/composite-threshold-rule-alarm.rst b/specs/mitaka/composite-threshold-rule-alarm.rst
index 02d2e5c..a20ecd8 100644
--- a/specs/mitaka/composite-threshold-rule-alarm.rst
+++ b/specs/mitaka/composite-threshold-rule-alarm.rst
@@ -143,7 +143,7 @@ Data model impact
 -----------------
 
 * The "threshold_rules" will be a dict that can be include multiple threshold
-  rules and the the key of the dict will be "and" or "or". But this has no
+  rules and the key of the dict will be "and" or "or". But this has no
   effect on data model, because the rules is stored as json in db.
 
 REST API impact
diff --git a/specs/mitaka/event-to-sample-publisher.rst b/specs/mitaka/event-to-sample-publisher.rst
index 25d028f..444d3ee 100644
--- a/specs/mitaka/event-to-sample-publisher.rst
+++ b/specs/mitaka/event-to-sample-publisher.rst
@@ -24,7 +24,7 @@ Proposed change
 ===============
 
 Add a new notifier publisher and use this publisher in event transformer,
 when there is a new transformed event from events pipeline
-transformers, convert it to notification, send it back to ceilometer notificaton listener,
+transformers, convert it to notification, send it back to ceilometer notification listener,
 the notification payload is generated from event traits, then the sample
 pipeline will convert and publish it into sample.
diff --git a/specs/mitaka/only-support-sqlalchemy-in-aodh.rst b/specs/mitaka/only-support-sqlalchemy-in-aodh.rst
index 2a68caa..ed7d911 100644
--- a/specs/mitaka/only-support-sqlalchemy-in-aodh.rst
+++ b/specs/mitaka/only-support-sqlalchemy-in-aodh.rst
@@ -114,7 +114,7 @@ Work Items
 
 In current cycle:
 
-* send a email to do a user survey to get feedbacks about this change
+* send an email to do a user survey to get feedbacks about this change
 * deprecate mongodb and Hbase storage support and log warning messages if
   config mongodb or Hbase as storage driver
 * add alarm data migration tool for migrating data from mongodb/Hbase to sql
diff --git a/specs/mitaka/rolling-upgrades.rst b/specs/mitaka/rolling-upgrades.rst
index b6d182a..fd78080 100644
--- a/specs/mitaka/rolling-upgrades.rst
+++ b/specs/mitaka/rolling-upgrades.rst
@@ -65,7 +65,7 @@ There are two upgrade paths to handle -- both require no code change:
    against same pipeline sets.
 4. The new polling agent can be started alongside the old agent only if no
    new pollsters were added. If not, new polling agents must start only
-   in it's own partitioning group and poll only the new pollsters. After
+   in its own partitioning group and poll only the new pollsters. After
    all old agents are upgraded, the polling agents can be changed to poll
    both new pollsters AND the old ones.
 5. API service management is handled by WSGI so there is only ever one