Installation and configuration

Installation

Important

First, set up the environment. All commands must be executed as the superuser.

Switch to superuser mode:

sudo -i

Important

Installation is performed on the control node.

  1. Before starting the installation, save the list of previously installed packages; this will allow you to restore the system painlessly in case of damage. To do this, run the following commands:

    mkdir -p /tmp/rollback/clouds
    pip3 freeze > /tmp/rollback/clouds/pip_before.txt
    

    After that, the directory /tmp/rollback/clouds will contain the file pip_before.txt with a list of the installed packages.

  2. Save the migration versions as well:

    openstack aos db list -n clouds > /tmp/rollback/clouds/migrations.txt
    

    where:

    • /tmp/rollback/clouds/ is the directory where the file is stored;
    • migrations.txt is the name of the file with the migration versions.
  3. Install the Clouds package:

    pip3 install clouds
    
  4. Save the list of packages installed after the installation, so that the changes can be rolled back:

    pip3 freeze > /tmp/rollback/clouds/pip_after.txt
    

Configuration

  1. Perform the initial configuration of the module:

    openstack aos configure -n clouds
    
  2. Create a directory for logs with the correct permissions:

    mkdir -p /var/log/aos/clouds
    chown -R aos:aos /var/log/aos/clouds
    
  3. Copy the sample configuration file; if non-standard parameters are used, edit them (for details, see Configuration file):

    cp /etc/aos/clouds.conf.example /etc/aos/clouds.conf
    
  4. Create the database (MySQL is used as an example) and configure access rights, the database type, and other parameters:

    # Log in to the database using the root password
    mysql -uroot -p
    # Create the clouds database
    CREATE DATABASE clouds;
    # Grant the user full privileges on all clouds database tables
    GRANT ALL PRIVILEGES ON clouds.* TO 'aos'@'localhost' IDENTIFIED BY 'password';
    GRANT ALL PRIVILEGES ON clouds.* TO 'aos'@'%' IDENTIFIED BY 'password';
    # Exit the database
    EXIT;
    
  5. Edit the [database] section of the configuration file /etc/aos/clouds.conf, for example:

    [database]
    url = mysql+pymysql://aos:password@tst.stand.loc:3306/clouds?charset=utf8
    
  6. Migrate the database:

    openstack aos db migrate -n clouds
    
  7. Create an OpenStack user for the API services:

    openstack user create --domain default --project service --project-domain default --password password --or-show aos
    
  8. Assign the service role to the user:

    openstack role add --user aos --user-domain default --project service --project-domain default service
    
  9. Enable and start the systemd services:

    systemctl daemon-reload
    systemctl enable aos-clouds-api.service aos-clouds-worker.service
    systemctl start aos-clouds-api.service aos-clouds-worker.service
    
  10. Create the Clouds API service:

    openstack service create --name clouds --description "Clouds Service" clouds
    
  11. Create endpoints:

    openstack endpoint create --region RegionOne clouds internal http://controller:9366
    openstack endpoint create --region RegionOne clouds admin http://controller:9366
    openstack endpoint create --region RegionOne clouds public http://controller:9366
    
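The database URL set in step 5 follows the standard SQLAlchemy format (dialect+driver://user:password@host:port/dbname?options). As a quick sanity check before writing such a URL into /etc/aos/clouds.conf, it can be decomposed with Python's standard library; the host and credentials below are the example values used in this guide:

```python
from urllib.parse import urlparse, parse_qs

# Decompose a SQLAlchemy-style database URL into its parts.
url = "mysql+pymysql://aos:password@tst.stand.loc:3306/clouds?charset=utf8"
parts = urlparse(url)

print(parts.scheme)            # dialect+driver: mysql+pymysql
print(parts.username)          # database user: aos
print(parts.hostname)          # database host: tst.stand.loc
print(parts.port)              # database port: 3306
print(parts.path.lstrip("/"))  # database name: clouds
print(parse_qs(parts.query))   # connection options: {'charset': ['utf8']}
```

If any of the printed parts differ from what you expect (a typo in the host name, a missing port), fix the URL before running the migration.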

Connecting drivers

Connecting the Nova Compute driver

Driver functionality

The driver's functionality includes creating, deleting, starting, and stopping instances in the AWS cloud.

Nova Compute driver connection procedure

  1. Install the clouds-drivers package on the compute node:

    pip3 install clouds-drivers
    
  2. Configure the driver in the Nova configuration file /etc/nova/nova.conf:

    • AWS:
    
        [DEFAULT]
        compute_driver = clouds_drivers.nova.virt.ec2.EC2Driver
    
    • Yandex:
    
        [DEFAULT]
        compute_driver = clouds_drivers.nova.virt.yandex.driver.YandexDriver
    
  3. Restart the nova-compute service:

    # Debian:
    systemctl restart nova-compute.service
    
  4. Configure the scheduler filter in the Nova configuration file /etc/nova/nova.conf on the control node:

    [filter_scheduler]
    available_filters = nova.scheduler.filters.all_filters
    available_filters = clouds_drivers.nova.scheduler.filter.CloudProjectFilter
    enabled_filters = AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,CloudProjectFilter
    
  5. Restart the nova-scheduler service on the control node:

    # Debian:
    systemctl restart nova-scheduler.service
    

Connecting the Cinder Volume driver

Driver functionality

The driver’s functionality includes creating and deleting volumes in the AWS cloud, as well as attaching volumes to and detaching them from instances created in the AWS cloud.

Cinder Volume driver connection procedure

  1. Install the clouds-drivers package on the node running the cinder-volume service:

    pip3 install clouds-drivers
    
  2. Configure the driver in the Cinder configuration file /etc/cinder/cinder.conf:

    • AWS:
    
        [DEFAULT]
        enabled_backends = ebs
    
        [ebs]
        volume_driver = clouds_drivers.cinder.volume.drivers.aws.ebs.EBSDriver
        volume_backend_name = ebs
    
    • Yandex:
    
        [DEFAULT]
        enabled_backends = yandex
    
        [yandex]
        volume_driver = clouds_drivers.cinder.volume.drivers.yandex.VolumeDriver
        volume_backend_name = yandex
    
  3. Restart cinder-volume service:

    # Debian:
    systemctl restart cinder-volume.service
    
  4. Configure the scheduler filter in the Cinder configuration file /etc/cinder/cinder.conf:

    [DEFAULT]
    scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,CloudBackendsFilter
    
  5. Restart the cinder-scheduler service:

    # Debian:
    systemctl restart cinder-scheduler.service
    

Connecting the Glance Store driver

Driver functionality

The driver's functionality includes registering AWS images in OpenStack, launching instances from the uploaded images, and uploading custom images to AWS.

Glance Store driver connection procedure

  1. Install the clouds-drivers package on the control node:

    pip3 install clouds-drivers
    
  2. Configure the driver in the Glance API configuration file /etc/glance/glance-api.conf:

    • AWS:
    
        [DEFAULT]
        enabled_backends = file_store:file, http_store:http, aws_store:aws
        show_multiple_locations = true
    
        [aws_store]
        store_description = "AWS images store"
        bucket = "s3_bucket_name_for_image_import"
    
        [glance_store]
        default_backend = file_store
    
    
    • Yandex:
    
        [DEFAULT]
        enabled_backends = file_store:file, http_store:http, yandex_store:yandex
        show_multiple_locations = true
    
        [glance_store]
        default_backend = file_store
    
  3. Configure access to the object storage in the Glance API configuration file /etc/glance/glance-api.conf:

    • AWS:

      [aws_store]
      store_description = "AWS images store"
      bucket = "s3_bucket_name_for_image_import"
      
    • Yandex:

      [yandex_store]
      store_description = "Yandex images store"
      bucket = "s3_bucket_name_for_image_import"
      region_name = ru-central1
      access_key_id = gUC7Dgk2ckZJTJ-2G7fm
      secret_access_key = m0BLkc8smdvqrQvuZzjxSEQoxLYkxxUxjAawcBfA
      

    where access_key_id and secret_access_key are the ID and key of the static access key for Object Storage, and bucket is the name of the bucket used for image import.

  4. Restart the glance-api service:

    # Debian:
    systemctl restart glance-api.service
    

Connecting the Neutron driver

Driver functionality (networks)

The driver’s functionality includes creating and deleting project networks. The network’s CIDR must not be larger than /16.

Note

When a network is created in OpenStack, a VPC with a /16 CIDR is created in AWS, and a network with the CIDR specified in OpenStack is created inside it.

Note

When creating a subnet, select an address pool from the x.x.x.4–x.x.x.254 range, because AWS reserves the IP addresses from .1 to .3. For details, see the AWS documentation.
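The rule above can be checked with Python's standard ipaddress module. This sketch, using a hypothetical 10.0.1.0/24 subnet, computes the allocation pool that skips the three host addresses AWS reserves:

```python
import ipaddress

# Hypothetical OpenStack subnet; AWS reserves the first three host
# addresses (.1-.3), so the usable allocation pool starts at .4.
subnet = ipaddress.ip_network("10.0.1.0/24")
hosts = list(subnet.hosts())   # .1 through .254 for a /24
pool_start = hosts[3]          # first address after the reserved .1-.3
pool_end = hosts[-1]           # last usable host address, .254
print(f"allocation pool: {pool_start}-{pool_end}")
# -> allocation pool: 10.0.1.4-10.0.1.254
```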

Note

Creation of provider networks is not supported.

Note

The IPv6 protocol is not supported by AWS VPC.

Driver functionality (routers and floating IPs)

Driver’s functionality includes creating and removing routers and floating IPs.

Creating a router in OpenStack creates an Internet Gateway (IG) in AWS without binding it to any VPC. Adding an interface to the router associates the VPC with this Internet Gateway and also adds the default route 0.0.0.0/0 to it.

Note

Adding an interface from a different subnet will not work, since an Internet Gateway can only be associated with one VPC.

For floating IPs to work correctly, you must manually create an external network in OpenStack that includes the entire pool of external (Elastic IP) addresses allocated in AWS or Yandex.Cloud.

Important

If the cloud (AWS or Yandex.Cloud) already has allocated Elastic IPs, the external network created in OpenStack must not distribute these IP addresses.

Neutron driver connection procedure

  1. Install the clouds-drivers package on the control node:

    pip3 install clouds-drivers
    
  2. Configure the driver in the Neutron configuration file /etc/neutron/neutron.conf. Replace the router service plugin with aws_router. Set api_extensions_path to <clouds_drivers>/neutron/extensions:

    Example:
    
    [DEFAULT]
    service_plugins = aws_router,qos
    core_plugin = ml2
    api_extensions_path = /usr/local/lib/python3.7/dist-packages/clouds_drivers/neutron/extensions/
    
  3. Configure the driver in the Neutron configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. Add the subnet_az extension to the list of extension_drivers parameter values. Add aws to the top of the mechanism_drivers list. Configure the remaining parameters:

    Example:
    
    [ml2]
    extension_drivers = port_security,qos,subnet_az
    type_drivers = local,flat
    tenant_network_types = local
    mechanism_drivers = aws,openvswitch,l2population
    
    [ml2_type_flat]
    flat_networks = *
    
  4. Restart the Neutron services:

    # Debian:
    systemctl restart neutron-*.service
    

Note

After configuring the driver, create a new network and a subnet for it in the project in order to work with AWS.

Correspondence table of actions in OpenStack and AWS

  #   Neutron                               AWS
  1   Creating a network                    None
  2   Creating a network with a CIDR        Creating a VPC with /16 and a subnet
      smaller than /16                      with the given CIDR
  3   Creating a router                     Creating an Internet Gateway
  4   Adding a gateway to a router          None
  5   Adding an interface to a router       Associating the VPC with the Internet Gateway
  6   Adding a floating IP to an instance   Adding an Elastic IP to the virtual machine
  7   Deleting a floating IP                Deleting the Elastic IP
  8   Deleting a subnet                     Deleting the subnet in the VPC; the VPC is
                                            deleted together with its last subnet
  9   Deleting a network                    None
  10  Deleting a router interface           Removing the VPC association with the Internet Gateway
  11  Deleting a router                     Deleting the Internet Gateway

Configuring project integration with AWS Cloud

  1. Create a project to integrate with the AWS cloud:

    openstack project create aws
    
  2. Assign the required roles:

    openstack role add --project aws --user admin admin
    openstack role add --project aws --user admin creator
    openstack role add --project aws --user cinder observer
    openstack role add --project aws --user glance observer
    openstack role add --project aws --user nova observer
    openstack role add --project aws --user neutron observer
    
  3. Set up the project for cloud integration:

    openstack aos clouds add aws --project aws --access-key "AKIAIOSFODNN7EXAMPLE" --secret-key "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" --region us-east-2 --az us-east-2a
    
  4. Register cloud images in OpenStack:

    openstack aos clouds images register --project aws --cloud-image-id ami-0f7919c33c90f5b58
    
  5. Register cloud flavors in OpenStack:

    openstack aos clouds flavors register --project aws
    

Configuring project integration with Yandex.Cloud

  1. Create a project to integrate with Yandex.Cloud:

    openstack project create yandex
    
  2. Assign the required roles:

    openstack role add --project yandex --user admin admin
    openstack role add --project yandex --user admin creator
    openstack role add --project yandex --user cinder observer
    openstack role add --project yandex --user glance observer
    openstack role add --project yandex --user nova observer
    openstack role add --project yandex --user neutron observer
    
  3. Set up the project for cloud integration:

    openstack aos clouds add yandex --project yandex --service-account-id aaaaaaaaaaaaaaaaaaaa --private-key private.key --key-id bbbbbbbbbbbbbbbbbbbb --folder-id cccccccccccccccccccc --az ru-central1-a
    

Configuration file

Note

The configuration file allows overriding sections and parameters of the common aos.conf file for a specific module.

Note

By default, the file clouds.conf.example contains no logging-level lines; add them if necessary. The logging level is set by default in the general configuration file. More information about the configuration files can be found in the corresponding section.

The configuration file is in INI format and consists of the following sections and parameters:

Section    Parameter   Description                            Default value
api        logfile     aos-clouds-api service log file path.
database   url         URL for accessing the database.        mysql+pymysql://aos:password@localhost:3306/clouds?charset=utf8

Configuration file example

[database]
url = mysql+pymysql://aos:password@localhost:3306/clouds?charset=utf8
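If the API log file path also needs to be overridden (the logfile parameter from the table above; the path shown here is illustrative), both sections can be combined in one file:

```ini
[api]
logfile = /var/log/aos/clouds/api.log

[database]
url = mysql+pymysql://aos:password@localhost:3306/clouds?charset=utf8
```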

Important

For changes to the configuration file parameters to take effect, perform the procedure described in the section «Updating the config file».

Recovery plan

To roll back if the Clouds plugin installation or update fails:

  1. Compare the migration versions in the file /tmp/rollback/clouds/migrations.txt with the current ones. If they differ, migrate to the previous version. Migration example:

    openstack aos db list -n clouds
    openstack aos db migrate -n clouds --migration 2
    
  2. Revert to the previous state of the packages:

    cd /tmp/rollback/clouds
    diff --changed-group-format='%>' --unchanged-group-format='' pip_before.txt pip_after.txt > pip_uninstall.txt
    diff --changed-group-format='%<' --unchanged-group-format='' pip_before.txt pip_after.txt > pip_install.txt
    pip3 uninstall -r pip_uninstall.txt
    pip3 install -r pip_install.txt
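
The two diff invocations above can be approximated in Python. This sketch is a rough equivalent that compares the freeze outputs as sets rather than line by line, and the package lists are purely illustrative:

```python
# Rough Python equivalent of the diff commands above: lines present only
# after the installation go to pip_uninstall.txt, lines present only
# before go to pip_install.txt.
def freeze_diff(before_lines, after_lines):
    before, after = set(before_lines), set(after_lines)
    to_uninstall = sorted(after - before)  # added or upgraded packages
    to_install = sorted(before - after)    # previous pinned versions
    return to_uninstall, to_install

# Illustrative pip freeze outputs (not real package lists from this guide)
before = ["requests==2.25.0", "six==1.15.0"]
after = ["clouds==1.0.0", "requests==2.25.0", "six==1.16.0"]
uninstall, install = freeze_diff(before, after)
print(uninstall)  # ['clouds==1.0.0', 'six==1.16.0']
print(install)    # ['six==1.15.0']
```

Upgraded packages appear in both output lists (the new pin is uninstalled, the old pin is reinstalled), which matches how the diff-based rollback behaves.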