Post-installation Configuration

openATTIC Base Configuration

After all the required packages have been installed and a storage pool has been created, you need to perform the actual openATTIC configuration by running oaconfig:

# oaconfig install

oaconfig install will start and enable a number of services, initialize the openATTIC database and scan the system for pools and volumes to include.

Changing the Default User Password

By default, oaconfig creates a local administrative user account openattic, whose initial password is also openattic.

As a security precaution, we strongly recommend changing this password immediately:

# oaconfig changepassword openattic
Changing password for user 'openattic'
Password: <enter password>
Password (again): <re-enter password>
Password changed successfully for user 'openattic'

Now, your openATTIC storage system can be managed via the user interface.

See Getting started for instructions on how to access the web user interface.

If you don’t want to manage your users locally, consult the chapter Configuring Authentication and Single Sign-On for alternative authentication and authorization methods.

Installing additional openATTIC Modules

After installing openATTIC, you can install additional modules (openattic-module-<module-name>) using your operating system’s native package manager, e.g.:

# apt-get install openattic-module-drbd # Debian/Ubuntu
# yum install openattic-module-btrfs # RHEL/CentOS

Note

Don’t forget to run oaconfig install after installing new modules.

Enabling Ceph Support in openATTIC

Note

Ceph support in openATTIC is currently developed against Ceph 10.2 aka “Jewel”. Older Ceph versions may not work as expected. If your Linux distribution ships an older version of Ceph (as most currently do), please either use the upstream Ceph package repositories or find an alternative package repository for your distribution that provides a version of Ceph that meets the requirements. Note that this applies to both the version of the Ceph tools installed on the openATTIC node as well as the version running on your Ceph cluster.
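As a minimal sketch of a pre-flight check, the following helper parses the output of ceph --version (which prints a line like "ceph version 10.2.11 (abc123)") and verifies it meets the 10.2 ("Jewel") minimum. The function name and the sample version strings are ours, not part of openATTIC or Ceph:

```shell
# ceph_version_ok: check a `ceph --version` output line against the
# 10.2 ("Jewel") minimum. Pass it the real output, e.g.:
#   ceph_version_ok "$(ceph --version)"
ceph_version_ok() {
    # $1: the full output line of `ceph --version`
    # Extract the third word ("10.2.11") and split it on dots.
    set -- $(echo "$1" | awk '{print $3}' | tr '.' ' ')
    major=$1
    minor=$2
    [ "$major" -gt 10 ] || { [ "$major" -eq 10 ] && [ "$minor" -ge 2 ]; }
}

ceph_version_ok "ceph version 10.2.11 (abc123)" && echo "Jewel or newer"
ceph_version_ok "ceph version 0.94.10 (def456)" || echo "too old"
```

Remember to run such a check both on the openATTIC node and against the tools installed on the cluster nodes, since the version requirement applies to both.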

To set up openATTIC with Ceph you first have to copy the Ceph administrator keyring and configuration from your Ceph admin node to your local openATTIC system.

From your Ceph admin node, you can perform this step by using ceph-deploy (assuming that you can perform SSH logins from the admin node into the openATTIC host):

# ceph-deploy admin openattic.yourdomain.com

On the openATTIC node, you should then have the following files:

/etc/ceph/ceph.client.admin.keyring
/etc/ceph/ceph.conf

Note

Please ensure that these files are actually readable by the openATTIC user (openattic) and the Nagios/Icinga user account (usually nagios or icinga) that runs the related Nagios checks. In a default installation, these users are added to the group openattic, so it should be sufficient to make sure these files are either world-readable or owned and readable by this group:

# chgrp openattic /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
# chmod g+r /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring

Alternatively, you can copy these files manually.

Note

openATTIC supports managing multiple Ceph clusters, provided they have different names and FSIDs. You can add another cluster by copying the cluster’s admin keyring and configuration into /etc/ceph using a different cluster name, e.g. development instead of the default name ceph:

/etc/ceph/development.client.admin.keyring
/etc/ceph/development.conf

The next step is to install the openATTIC Ceph module openattic-module-ceph on your system:

# apt-get install openattic-module-ceph # Debian/Ubuntu
# yum install openattic-module-ceph # RHEL/CentOS

The package should automatically install any additionally required dependencies. The last step is to recreate your openATTIC configuration:

# oaconfig install

Rados Gateway management features

If you want to enable the Rados Gateway management features, you need to configure the credentials manually. You can do so in the distribution-specific configuration file: /etc/default/openattic on Debian-based distributions, or /etc/sysconfig/openattic on RedHat-based distributions and SUSE Linux. openATTIC supports both retrieving the credentials from DeepSea and configuring the Rados Gateway credentials directly.

Caution

The two configuration files mentioned above are read by Python as well as Bash. The files therefore need to be in a format that Bash can understand, which means spaces before or after the equals signs are not allowed!
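The constraint can be illustrated with a scratch file (not the real configuration): Bash sources these files, and an assignment only counts as an assignment when there are no spaces around the equals sign.

```shell
# Demonstrate the "no spaces around =" rule on a temporary file;
# the variable name matches the real openATTIC setting.
tmp=$(mktemp)
echo 'RGW_API_PORT=80' > "$tmp"     # valid: both Bash and Python accept this
. "$tmp"
echo "RGW_API_PORT=$RGW_API_PORT"   # prints RGW_API_PORT=80
rm -f "$tmp"
```

A line such as RGW_API_PORT = 80 would instead be interpreted by Bash as an attempt to run a command named RGW_API_PORT, and the setting would silently not take effect.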

This is an example for the directly configured Rados Gateway credentials:

RGW_API_HOST="ceph-1"
RGW_API_PORT=80
RGW_API_SCHEME="http"
RGW_API_ACCESS_KEY="VFEG733GBY0DJCIV6NK0"
RGW_API_SECRET_KEY="lJzPbZYZTv8FzmJS5eiiZPHxlT2LMGOMW8ZAeOAq"

Note

If your Rados Gateway admin resource isn’t configured to use the default value admin (e.g. http://host:80/admin), you will also need to set the RGW_API_ADMIN_RESOURCE option appropriately.
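For example, if the admin resource had been renamed to mgmt (a hypothetical name chosen here for illustration, making the endpoint http://host:80/mgmt), the additional setting would look like this:

```shell
# "mgmt" is a hypothetical example name for a non-default admin resource:
RGW_API_ADMIN_RESOURCE="mgmt"
```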

You can obtain these credentials by issuing the radosgw-admin command like so:

radosgw-admin user info --uid=admin
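The relevant part of the command's JSON output is the keys section, which contains the access and secret key pairs. A heavily abridged sample (most fields omitted, key values taken from the configuration example above):

```json
{
    "user_id": "admin",
    "keys": [
        {
            "user": "admin",
            "access_key": "VFEG733GBY0DJCIV6NK0",
            "secret_key": "lJzPbZYZTv8FzmJS5eiiZPHxlT2LMGOMW8ZAeOAq"
        }
    ]
}
```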

Alternatively, if you have a Ceph cluster managed or deployed by DeepSea, openATTIC can obtain the Rados Gateway credentials via DeepSea’s REST API.

To enable the DeepSea REST API, issue the following command on the Salt master node:

salt-call state.apply ceph.salt-api

Afterwards, set the following variables to their corresponding values so that openATTIC can talk to DeepSea and obtain the Rados Gateway credentials:

SALT_API_HOST="salt"
SALT_API_PORT=8000
SALT_API_USERNAME="admin"
SALT_API_PASSWORD="admin"