Merge "Add Section 4 for FirstApp for Fog"

This commit is contained in:
Jenkins 2015-11-22 22:33:21 +00:00 committed by Gerrit Code Review
commit f71e9fca15
3 changed files with 281 additions and 89 deletions

@ -0,0 +1,102 @@
# step-1
require 'fog'
auth_username = 'your_auth_username'
auth_password = 'your_auth_password'
auth_url = 'http://controller:5000'
project_name = 'your_project_name_or_id'
region_name = 'your_region_name'
swift = Fog::Storage.new({
:provider => 'openstack',
:openstack_auth_url => auth_url + '/v2.0/tokens',
:openstack_username => auth_username,
:openstack_tenant => project_name,
:openstack_api_key => auth_password,
:openstack_region => region_name
})
# step-2
container_name = 'fractals'
container = swift.directories.create :key => container_name
# step-3
puts swift.directories
# step-4
file_path = 'goat.jpg'
object_name = 'an amazing goat'
container = swift.directories.get container_name
object = container.files.create :key => object_name, :body => File.open(file_path)
# step-5
objects = container.files
puts objects
# step-6
object = container.files.get(object_name)
puts object
# step-7
require 'digest/md5'
file = File.open('goat.jpg', 'rb')
contents = file.read
file.close
digest = Digest::MD5.hexdigest(contents)
puts digest
# step-8
object.destroy
# step-9
objects = container.files
puts objects
# step-10
container_name = 'fractals'
container = swift.directories.get container_name
# step-11
require "net/https"
require "uri"
require "json"
endpoint = 'http://IP_API_1'
uri = URI.parse('%s/v1/fractal?results_per_page=-1' % endpoint)
http = Net::HTTP.new(uri.host, uri.port)
request = Net::HTTP::Get.new(uri.request_uri)
response = http.request(request)
data = JSON.parse(response.body)
data['objects'].each do |fractal|
  fractal_uri = URI.parse('%s/fractal/%s' % [endpoint, fractal['uuid']])
  request = Net::HTTP::Get.new(fractal_uri.request_uri)
  response = http.request(request)
  container.files.create :key => fractal['uuid'], :body => response.body
end
container.files.each do |object|
  puts object
end
# step-12
container.files.each do |object|
  object.destroy
end
container.destroy
# step-13
file_path = 'goat.jpg'
object_name = 'backup_goat.jpg'
metadata = {'description' => 'a funny goat', 'created' => '2015-06-02'}
container.files.create :key => object_name, :body => File.open(file_path), :metadata => metadata
# step-14
chunk_size = 33554432
i = 0
File.open(file_path, 'rb') do |file|
  while (chunk = file.read(chunk_size))
    # upload each segment as <object_name>/<index>
    swift.put_object(container_name, '%s/%08d' % [object_name, i], chunk)
    i += 1
  end
end
swift.put_object_manifest(container_name, object_name, 'X-Object-Manifest' => container_name + '/' + object_name + '/')
# step-15

@ -1,6 +1,6 @@
================= ===============
Making it durable Make it durable
================= ===============
.. todo:: https://github.com/apache/libcloud/pull/492 .. todo:: https://github.com/apache/libcloud/pull/492
@ -11,66 +11,59 @@ Making it durable
.. todo:: Large object support in Swift .. todo:: Large object support in Swift
http://docs.openstack.org/developer/swift/overview_large_objects.html http://docs.openstack.org/developer/swift/overview_large_objects.html
This section introduces object storage. `OpenStack Object Storage This section introduces object storage.
<http://www.openstack.org/software/openstack-storage/>`_ (code-named
swift) is open source software for creating redundant, scalable data
storage using clusters of standardized servers to store petabytes of
accessible data. It is a long-term storage system for large amounts
of static data that can be retrieved, leveraged, and updated. Access
is via an API, not through a file-system like more traditional
storage.
There are a two key concepts to understand in the Object Storage `OpenStack Object Storage <http://www.openstack.org/software/openstack-storage/>`_
API. The Object Storage API is organized around two types of entities: (code-named swift) is open-source software that enables you to create
redundant, scalable data storage by using clusters of standardized servers to
store petabytes of accessible data. It is a long-term storage system for large
amounts of static data that you can retrieve, leverage, and update. Unlike
more traditional storage systems that you access through a file system, you
access Object Storage through an API.
* Objects The Object Storage API is organized around objects and containers.
* Containers
Similar to the Unix programming model, an object is a "bag of bytes" Similar to the UNIX programming model, an object, such as a document or an
that contains data, such as documents and images. Containers are used image, is a "bag of bytes" that contains data. You use containers to group
to group objects. You can make many objects inside a container, and objects. You can place many objects inside a container, and your account can
have many containers inside your account. have many containers.
If you think about how you traditionally make what you store durable, If you think about how you traditionally make what you store durable, you
very quickly you should come to the conclusion that keeping multiple quickly conclude that keeping multiple copies of your objects on separate
copies of your objects on separate systems is a good way to do systems is a good strategy. However, keeping track of those multiple
that. However, keeping track of multiple copies of objects is a pain, copies is difficult, and building that into an app requires complicated logic.
and building that into an app requires a lot of logic. OpenStack
Object Storage does this automatically for you behind-the-scenes - OpenStack Object Storage automatically replicates each object at least twice
replicating each object at least twice before returning 'write before returning 'write success' to your API call. A good strategy is to keep
success' to your API call. It will always work to ensure that there three copies of objects, by default, at all times, replicating them across the
are three copies of your objects (by default) at all times - system in case of hardware failure, maintenance, network outage, or another
replicating them around the system in case of hardware failure, kind of breakage. This strategy is very convenient for app creation. You can
maintenance, network outage or any other kind of breakage. This is just dump objects into object storage and not worry about the additional work
very convenient for app creation - you can just dump objects into that it takes to keep them safe.
object storage and not have to care about any of this additional work
to keep them safe.
Using Object Storage to store fractals Use Object Storage to store fractals
-------------------------------------- ------------------------------------
The Fractals app currently uses the local file system on the instance The Fractals app currently uses the local file system on the instance to store
to store the images it generates. This is not scalable or durable, for the images that it generates. For a number of reasons, this approach is not
a number of reasons. scalable or durable.
Because the local file system is ephemeral storage, if the instance is Because the local file system is ephemeral storage, the fractal images are
terminated, the fractal images will be lost along with the instance. lost along with the instance when the instance is terminated. Block-based
Block based storage, which we will discuss in :doc:`/block_storage`, storage, which the :doc:`/block_storage` section discusses, avoids that
avoids that problem, but like local file systems, it requires problem, but like local file systems, it requires administration to ensure
administration to ensure that it does not fill up, and immediate that it does not fill up, and immediate attention if disks fail.
attention if disks fail.
The Object Storage service manages many of these tasks that normally The Object Storage service manages many of the tasks normally managed by the
would require the application owner to manage them, and presents a application owner. The Object Storage service provides a scalable and durable
scalable and durable API that you can use for the fractals app, API that you can use for the fractals app, eliminating the need to be aware of
without having to be concerned with the low level details of how the the low level details of how objects are stored and replicated, and how to
objects are stored and replicated, and growing the storage pool. In grow the storage pool. Object Storage handles replication for you. It stores
fact, Object Storage handles replication intrinsically, storing multiple multiple copies of each object. You can use the Object Storage API to return
copies of each object and returning one of them on demand using the an object, on demand.
API.
First, let's learn how to connect to the Object Storage endpoint: First, learn how to connect to the Object Storage endpoint:
.. only:: dotnet .. only:: dotnet
@ -78,7 +71,10 @@ First, let's learn how to connect to the Object Storage endpoint:
.. only:: fog .. only:: fog
.. warning:: This section has not yet been completed for the fog SDK. .. literalinclude:: ../samples/fog/durability.rb
:start-after: step-1
:end-before: step-2
.. only:: jclouds .. only:: jclouds
@ -123,6 +119,18 @@ First, let's learn how to connect to the Object Storage endpoint:
To begin to store objects, we must first make a container. To begin to store objects, we must first make a container.
Call yours :code:`fractals`: Call yours :code:`fractals`:
.. only:: fog
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-2
:end-before: step-3
You should see output such as:
.. code-block:: ruby
TBC
.. only:: libcloud .. only:: libcloud
.. literalinclude:: ../samples/libcloud/durability.py .. literalinclude:: ../samples/libcloud/durability.py
@ -138,6 +146,18 @@ Call yours :code:`fractals`:
You should now be able to see this container appear in a listing of You should now be able to see this container appear in a listing of
all containers in your account: all containers in your account:
.. only:: fog
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-3
:end-before: step-4
You should see output such as:
.. code-block:: ruby
TBC
.. only:: libcloud .. only:: libcloud
.. literalinclude:: ../samples/libcloud/durability.py .. literalinclude:: ../samples/libcloud/durability.py
@ -154,6 +174,12 @@ The next logical step is to upload an object. Find a photo of a goat
on line, name it :code:`goat.jpg`, and upload it to your on line, name it :code:`goat.jpg`, and upload it to your
:code:`fractals` container: :code:`fractals` container:
.. only:: fog
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-4
:end-before: step-5
.. only:: libcloud .. only:: libcloud
.. literalinclude:: ../samples/libcloud/durability.py .. literalinclude:: ../samples/libcloud/durability.py
@ -164,6 +190,34 @@ List objects in your :code:`fractals` container to see if the upload
was successful. Then, download the file to verify that the md5sum is was successful. Then, download the file to verify that the md5sum is
the same: the same:
.. only:: fog
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-5
:end-before: step-6
::
TBC
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-6
:end-before: step-7
::
TBC
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-7
:end-before: step-8
::
7513986d3aeb22659079d1bf3dc2468b
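The md5sum check above reads the whole file into memory first. For larger files, a minimal sketch of the same check that digests the file in fixed-size chunks, using only the Ruby standard library (the file path and chunk size here are illustrative assumptions, not part of the tutorial steps):

```ruby
require 'digest/md5'

# Compute the MD5 of a file incrementally, 32 KB at a time, instead of
# slurping the whole file; the result matches Digest::MD5.hexdigest on
# the full contents.
def chunked_md5(path, chunk_size = 32 * 1024)
  digest = Digest::MD5.new
  File.open(path, 'rb') do |f|
    while (chunk = f.read(chunk_size))
      digest.update(chunk)
    end
  end
  digest.hexdigest
end
```

You can compare this digest against the hex digest of the downloaded object body to confirm the round trip was lossless.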
.. only:: libcloud .. only:: libcloud
.. literalinclude:: ../samples/libcloud/durability.py .. literalinclude:: ../samples/libcloud/durability.py
@ -192,8 +246,13 @@ the same:
7513986d3aeb22659079d1bf3dc2468b 7513986d3aeb22659079d1bf3dc2468b
Finally, clean up by deleting the test object:
Finally, let's clean up by deleting our test object: .. only:: fog
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-8
:end-before: step-9
.. only:: libcloud .. only:: libcloud
@ -201,9 +260,9 @@ Finally, let's clean up by deleting our test object:
:start-after: step-8 :start-after: step-8
:end-before: step-9 :end-before: step-9
.. note:: You need to pass in objects to the delete commands, not object names. .. note:: You must pass in objects and not object names to the delete commands.
Now there should be no more objects be available in the container :code:`fractals`. Now, no more objects are available in the :code:`fractals` container.
.. literalinclude:: ../samples/libcloud/durability.py .. literalinclude:: ../samples/libcloud/durability.py
:start-after: step-9 :start-after: step-9
@ -213,14 +272,19 @@ Finally, let's clean up by deleting our test object:
[] []
Backup the Fractals from the database on the Object Storage Back up the Fractals from the database on the Object Storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
So let's now use the knowledge from above to backup the images of the Back up the Fractals app images, which are currently stored inside the
Fractals app, stored inside the database right now, on the Object database, on Object Storage.
Storage.
Use the :code:`fractals`' container from above to put the images in: Place the images in the :code:`fractals` container:
.. only:: fog
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-10
:end-before: step-11
.. only:: libcloud .. only:: libcloud
@ -228,8 +292,16 @@ Use the :code:`fractals`' container from above to put the images in:
:start-after: step-10 :start-after: step-10
:end-before: step-11 :end-before: step-11
Next, we backup all of our existing fractals from the database to our Next, back up all existing fractals from the database to the swift container.
swift container. A simple for loop takes care of that: A simple `for` loop takes care of that:
.. note:: Replace :code:`IP_API_1` with the IP address of the API instance.
.. only:: fog
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-11
:end-before: step-12
.. only:: libcloud .. only:: libcloud
@ -264,9 +336,14 @@ Extra features
Delete containers Delete containers
~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~
One call we did not cover and that you probably need to know is how To delete a container, make sure that you have removed all objects from the
to delete a container. Ensure that you have removed all objects from container before running this script. Otherwise, the script fails:
the container before running this script. Otherwise, the script fails:
.. only:: fog
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-12
:end-before: step-13
.. only:: libcloud .. only:: libcloud
@ -279,13 +356,18 @@ the container before running this script. Otherwise, the script fails:
Add metadata to objects Add metadata to objects
~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~
You can also do advanced things like uploading an object with You can complete advanced tasks such as uploading an object with metadata, as
metadata, such as in following example. For more information, see the shown in following example. For more information, see the documentation for
documentation for your SDK. This option also uses a bit stream to your SDK. This option also uses a bit stream to upload the file, iterating bit
upload the file, iterating bit by bit over the file and passing those by bit over the file and passing those bits to Object Storage as they come.
bits to Object Storage as they come. Compared to loading the entire Compared to loading the entire file in memory and then sending it, this method
file in memory and then sending it, this method is more efficient, is more efficient, especially for larger files.
especially for larger files.
.. only:: fog
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-13
:end-before: step-14
.. only:: libcloud .. only:: libcloud
@ -298,18 +380,22 @@ especially for larger files.
Large objects Large objects
~~~~~~~~~~~~~ ~~~~~~~~~~~~~
For efficiency, most Object Storage installations treat large objects For efficiency, most Object Storage installations treat large objects,
(say, :code:`> 5GB`) differently than smaller objects. :code:`> 5GB`, differently than smaller objects.
.. only:: fog
.. literalinclude:: ../samples/fog/durability.rb
:start-after: step-14
:end-before: step-15
.. only:: libcloud .. only:: libcloud
If you are working with large objects, use the If you work with large objects, use the :code:`ex_multipart_upload_object`
:code:`ex_multipart_upload_object` call instead of the simpler call instead of the simpler :code:`upload_object` call. The call splits
:code:`upload_object` call. Behind the scenes, the call splits the the large object into chunks and creates a manifest so that the chunks can
large object into chunks and creates a special manifest so that be recombined on download. Change the :code:`chunk_size` parameter, in
the chunks can be recombined on download. Alter the bytes, to a value that your cloud can accept.
:code:`chunk_size` parameter (in bytes) according to what your
cloud can accept.
.. literalinclude:: ../samples/libcloud/durability.py .. literalinclude:: ../samples/libcloud/durability.py
:start-after: step-14 :start-after: step-14
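The segmenting that the large-object calls perform can be sketched in plain Ruby: split the source file into fixed-size pieces and give each piece a sequential name under an `<object>/<index>` prefix, which is the pattern a dynamic large object manifest points at. The segment naming and size here are illustrative assumptions, not an exact reproduction of any SDK's internals:

```ruby
# Split a local file into (segment_name, data) pairs. Zero-padded
# indexes keep the segments in lexical order, so a manifest whose
# X-Object-Manifest prefix matches them reassembles the file correctly.
def split_into_segments(path, segment_size)
  segments = []
  File.open(path, 'rb') do |f|
    i = 0
    while (chunk = f.read(segment_size))
      segments << ['%s/%08d' % [File.basename(path), i], chunk]
      i += 1
    end
  end
  segments
end
```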
@ -322,11 +408,15 @@ Next steps
You should now be fairly confident working with Object Storage. You You should now be fairly confident working with Object Storage. You
can find more information about the Object Storage SDK calls at: can find more information about the Object Storage SDK calls at:
.. only:: fog
https://github.com/fog/fog/blob/master/lib/fog/openstack/docs/storage.md
.. only:: libcloud .. only:: libcloud
https://libcloud.readthedocs.org/en/latest/storage/api.html https://libcloud.readthedocs.org/en/latest/storage/api.html
Or, try one of these steps in the tutorial: Or, try one of these tutorial steps:
* :doc:`/block_storage`: Migrate the database to block storage, or use * :doc:`/block_storage`: Migrate the database to block storage, or use
the database-as-a-service component. the database-as-a-service component.

@ -202,7 +202,7 @@ instances. After also attaching this router to our internal networks,
we will allocate floating IPs from the public network for instances we will allocate floating IPs from the public network for instances
which need to be accessed from the Internet. which need to be accessed from the Internet.
Let's just confirm that we have a public network by listing the Confirm that we have a public network by listing the
networks our tenant has access to. The public network does not have to networks our tenant has access to. The public network does not have to
be named public - it could be 'external', 'net04_ext' or something be named public - it could be 'external', 'net04_ext' or something
else - the important thing is it exists and can be used to reach the else - the important thing is it exists and can be used to reach the
@ -529,7 +529,7 @@ nodes. The end result will be the following network topology:
libcloud support added 0.14: libcloud support added 0.14:
https://developer.rackspace.com/blog/libcloud-0-dot-14-released/ https://developer.rackspace.com/blog/libcloud-0-dot-14-released/
Let's start by looking at what's already in place. Start by looking at what is already in place.
:: ::
@ -686,7 +686,7 @@ You should be able to see them in the member list:
| f3ba0605-4926-4498-b86d-51002892e93a | 203.0.113.22 | 80 | 1 | True | ACTIVE | | f3ba0605-4926-4498-b86d-51002892e93a | 203.0.113.22 | 80 | 1 | True | ACTIVE |
+--------------------------------------+--------------+---------------+--------+----------------+--------+ +--------------------------------------+--------------+---------------+--------+----------------+--------+
Now let's create a health monitor that will ensure that members of the Now, create a health monitor that will ensure that members of the
load balancer pool are active and able to respond to requests. If a load balancer pool are active and able to respond to requests. If a
member in the pool dies or is unresponsive, the member is removed from member in the pool dies or is unresponsive, the member is removed from
the pool so that client requests are routed to another active member. the pool so that client requests are routed to another active member.
@ -751,7 +751,7 @@ And confirm it is in place:
| f0bcb66e-5eeb-447b-985e-faeb67540c2f | myvip | 203.0.113.63 | HTTP | True | ACTIVE | | f0bcb66e-5eeb-447b-985e-faeb67540c2f | myvip | 203.0.113.63 | HTTP | True | ACTIVE |
+--------------------------------------+-------+--------------+----------+----------------+--------+ +--------------------------------------+-------+--------------+----------+----------------+--------+
Now let's look at the big picture. Now, look at the big picture.
Final result Final result
~~~~~~~~~~~~ ~~~~~~~~~~~~