Post-installation Configuration

openATTIC Base Configuration

After all the required packages have been installed and a storage pool has been created, you need to perform the actual openATTIC configuration by running oaconfig:

# oaconfig install

oaconfig install will start and enable a number of services, initialize the openATTIC database, and scan the system for pools and volumes to include.
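The exact set of services depends on your distribution and the installed modules. Assuming systemd, a quick sanity check (the openattic* unit name pattern is an assumption and may vary by version) is to list the related units:

# systemctl list-units 'openattic*'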

Changing the Default User Password

By default, oaconfig creates a local administrative user account openattic, whose password is also openattic.

As a security precaution, we strongly recommend changing this password immediately:

# oaconfig changepassword openattic
Changing password for user 'openattic'
Password: <enter password>
Password (again): <re-enter password>
Password changed successfully for user 'openattic'

Now, your openATTIC storage system can be managed via the user interface.

See Getting started for instructions on how to access the web user interface.

If you don’t want to manage your users locally, consult the chapter Configuring Authentication and Single Sign-On for alternative authentication and authorization methods.

Installing additional openATTIC Modules

After installing openATTIC, you can install additional modules (openattic-module-<module-name>) using your operating system’s native package manager, e.g.:

# apt-get install openattic-module-drbd # Debian/Ubuntu
# yum install openattic-module-btrfs # RHEL/CentOS
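
To see which modules are available for installation (assuming the openattic-module-* package naming shown above), you can query the package manager:

# apt-cache search openattic-module # Debian/Ubuntu
# yum search openattic-module # RHEL/CentOS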

Note

Don’t forget to run oaconfig install after installing new modules.

Enabling Ceph Support in openATTIC

Note

Ceph support in openATTIC is currently developed against Ceph 10.2 aka “Jewel”. Older Ceph versions may not work as expected. If your Linux distribution ships an older version of Ceph (as most currently do), please either use the upstream Ceph package repositories or find an alternative package repository for your distribution that provides a version of Ceph that meets the requirements. Note that this applies to both the version of the Ceph tools installed on the openATTIC node as well as the version running on your Ceph cluster.
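
As a quick version check, you can compare the locally installed Ceph tools against the version reported by the cluster monitors (the second command requires a working cluster connection, which is set up in the following steps):

# ceph --version # version of the local Ceph tools
# ceph version   # version reported by the cluster monitors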

To set up openATTIC with Ceph you first have to copy the Ceph administrator keyring and configuration from your Ceph admin node to your local openATTIC system.

From your Ceph admin node, you can perform this step by using ceph-deploy (assuming that you can perform SSH logins from the admin node into the openATTIC host):

# ceph-deploy admin openattic.yourdomain.com

On the openATTIC node, you should then have the following files:

/etc/ceph/ceph.client.admin.keyring
/etc/ceph/ceph.conf

Note

Please ensure that these files are actually readable by the openATTIC user (openattic) and the Nagios/Icinga user account (usually nagios or icinga) that runs the related Nagios checks. In a default installation, these users are added to the group openattic, so it should be sufficient to make sure these files are either world-readable or owned and readable by this group:

# chgrp openattic /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
# chmod g+r /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
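
With the permissions in place, a simple check (a hedged example, assuming the openattic system user described above exists) is to query the cluster health as that user:

# sudo -u openattic ceph health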

Alternatively, you can copy these files manually.
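
For example, a manual copy from the Ceph admin node via scp could look like this (the hostname is the placeholder used above):

# scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring root@openattic.yourdomain.com:/etc/ceph/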

Note

openATTIC supports managing multiple Ceph clusters, provided they have different names and FSIDs. You can add another cluster by copying the cluster’s admin keyring and configuration into /etc/ceph using a different cluster name, e.g. development instead of the default name ceph:

/etc/ceph/development.client.admin.keyring
/etc/ceph/development.conf
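
You can verify that such an additional cluster is reachable by passing its name to the Ceph command line tools, e.g.:

# ceph --cluster development health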

The next step is to install the openATTIC Ceph module openattic-module-ceph on your system:

# apt-get install openattic-module-ceph # Debian/Ubuntu
# yum install openattic-module-ceph # RHEL/CentOS

The package should automatically install any additional required dependencies. The last step is to re-run the openATTIC configuration:

# oaconfig install

DeepSea integration in openATTIC

Some openATTIC features, like Ceph iSCSI and RGW management, make use of the DeepSea REST API.

To enable the DeepSea REST API, issue the following command on the Salt master node:

salt-call state.apply ceph.salt-api
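
To check that the REST API is actually listening (assuming the default hostname salt and port 8000 described below), a plain HTTP request against its root URL should return a response:

curl -si http://salt:8000/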

By default, openATTIC assumes that the Salt master’s hostname is salt, the API port is 8000, and the API username is admin. If you need to change any of these default values, configure them in either /etc/default/openattic on Debian-based distributions or /etc/sysconfig/openattic on RedHat-based distributions and SUSE Linux.

Available settings are:

SALT_API_HOST='salt'
SALT_API_PORT=8000
SALT_API_USERNAME='admin'
SALT_API_PASSWORD='admin'

Caution

Do not use spaces before or after the equal signs.
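
Once these values are set, you can verify them against the Salt API by requesting a login token directly (a hedged example; DeepSea commonly configures the Salt API with the sharedsecret eauth module, but your setup may use a different one such as pam):

curl -si http://salt:8000/login -d username=admin -d password=admin -d eauth=sharedsecret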

Rados Gateway management features

If you want to enable the Rados Gateway management features and you are using DeepSea, you only need to ensure that the Salt API is correctly configured (see DeepSea integration in openATTIC). If you are not using DeepSea, you have to configure the Rados Gateway manually by editing either /etc/default/openattic on Debian-based distributions or /etc/sysconfig/openattic on RedHat-based distributions and SUSE Linux.

This is an example for the manually configured Rados Gateway credentials:

RGW_API_HOST="ceph-1"
RGW_API_PORT=80
RGW_API_SCHEME="http"
RGW_API_ACCESS_KEY="VFEG733GBY0DJCIV6NK0"
RGW_API_SECRET_KEY="lJzPbZYZTv8FzmJS5eiiZPHxlT2LMGOMW8ZAeOAq"

Note

If your Rados Gateway admin resource isn’t configured to use the default value admin (e.g. http://host:80/admin), you will need to also set the RGW_API_ADMIN_RESOURCE option appropriately.
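
For example, if your Ceph configuration sets rgw admin entry = management (a hypothetical value), the corresponding openATTIC setting would be:

RGW_API_ADMIN_RESOURCE="management"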

You can obtain these credentials by issuing the radosgw-admin command like so:

radosgw-admin user info --uid=admin
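
The access and secret keys appear in the keys section of the command’s JSON output. Assuming the jq utility is installed, you could extract them like so:

radosgw-admin user info --uid=admin | jq -r '.keys[0].access_key, .keys[0].secret_key'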