Jan 24 / Greg

“Build And Replace” Linux Migration Via Ansible, Ascender, AWX, AAP

Migrating from one Linux major version to another never seems to be a simple task, but through the magic of automation it can be made much simpler and more reproducible. I’m going to cover the Ansible playbooks I created to do the work, then execute them using our enterprise automation platform, Ascender.

Our recommended method is to:
– Back up configuration and data from the old server
– Provision a brand new server with the required apps
– Restore configurations and data to the new server
– Test services on the new server
– Sunset the old server

Video Demo

Playbooks

First, I’m using resources from the community.general collection, found here. I actually have a copy of it included in my Git repository.

All of my playbooks can be found here in my git repository.

I’ll cover some of the playbooks here… mostly discussing the highlights. The discover-backup.yml playbook is the first playbook run:

---
- name: Discover/backup hosts to be migrated
  hosts: migration-hosts
  gather_facts: false
  vars:
    # The host to store backup info to
    backup_storage: backup-storage
 
    # The location on the backup host to store info
    backup_location: /tmp/migration
  tasks:
  - name: Execute rpm to get list of installed packages
    ansible.builtin.command: rpm -qa --qf "%{NAME} %{VERSION}-%{RELEASE}\n"
    register: rpm_query
 
  - name: Populate service facts - look for running services
    ansible.builtin.service_facts:
 
  # - name: Print service facts
  #   ansible.builtin.debug:
  #     var: ansible_facts.services
 
  - name: Create backup directory on backup server - unique for each host
    ansible.builtin.file:
      path: "{{ backup_location }}/{{ inventory_hostname }}"
      state: directory
      mode: '0733'
    delegate_to: "{{ backup_storage }}"
 
  # - name: Backup groups
  #   ansible.builtin.include_tasks:
  #     file: group-backup.yml
 
  - name: Backup Apache when httpd is installed and enabled
    when: item is search('httpd ') and ansible_facts.services['httpd.service'].status == 'enabled'
    ansible.builtin.include_tasks:
      file: apache-backup.yml 
    loop: "{{ rpm_query.stdout_lines }}"

In the above, the first task uses the rpm command to gather information on all of the installed packages. Generally, I prefer to use a purpose-built module if one exists; in this instance, the ansible.builtin.package_facts module is designed to do exactly this, but I found it didn’t always report correctly on CentOS 7 servers, so I went with the rpm command, which always works. The resulting list of packages is used in the last task of the playbook.
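
If package_facts does work reliably in your environment, a minimal sketch of that alternative would look like this (the debug task is just to show where the results land):

# A sketch of the purpose-built alternative to the rpm command; only worth
# using if it reports correctly on your distributions
- name: Gather installed package facts
  ansible.builtin.package_facts:
    manager: auto

- name: Show the installed httpd version, if any
  ansible.builtin.debug:
    msg: "httpd {{ ansible_facts.packages['httpd'][0].version }} is installed"
  when: "'httpd' in ansible_facts.packages"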

Next, I create a directory for each host on a backup server. This will be the repository for all of my configs and data backed up from the old server.

The last task is where the real work happens. I loop over the list of installed packages on the server and check whether one of them is Apache and whether the httpd service is enabled. If both conditions are met, it pulls in the apache-backup.yml task file, which I created to back up the pieces of my environment. If I had FTP services on some of my servers, I would also need an ftp-backup task file and an additional matching task, just like the apache-backup pair (a sketch of that matching task follows).
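
As an example of how the pattern extends, a matching task for a hypothetical FTP backup would look like this (the ftp-backup.yml task file and the vsftpd package/service names are assumptions for illustration, not part of my repository):

# Hypothetical example mirroring the Apache task above
- name: Backup FTP when vsftpd is installed and enabled
  when: item is search('vsftpd ') and ansible_facts.services['vsftpd.service'].status == 'enabled'
  ansible.builtin.include_tasks:
    file: ftp-backup.yml
  loop: "{{ rpm_query.stdout_lines }}"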

The apache-backup.yml file is actually fairly simple:

# Task file for backing up apache
 
# Backup apache config files
- name: Create an archive of the config files
  community.general.archive:
    path: /etc/httpd/con*
    dest: "/tmp/{{ inventory_hostname }}-httpd.tgz"
 
- name: Copy apache config files to ansible server
  ansible.builtin.fetch:
    src: "/tmp/{{ inventory_hostname }}-httpd.tgz"
    dest: "/tmp/{{ inventory_hostname }}-httpd.tgz"
    flat: true # Changes default fetch so it will save directly in destination
 
- name: Copy config archive to backup server from local ansible server
  ansible.builtin.copy:
    src: "/tmp/{{ inventory_hostname }}-httpd.tgz"
    dest: "{{ backup_location }}/{{ inventory_hostname }}/{{ inventory_hostname }}-httpd.tgz"
  delegate_to: "{{ backup_storage }}"
 
# Backup apache data files
- name: Create an archive of the data directories
  community.general.archive:
    path: /var/www
    dest: "/tmp/{{ inventory_hostname }}-httpd-data.tgz"
 
- name: Copy apache data files to ansible server
  ansible.builtin.fetch:
    src: "/tmp/{{ inventory_hostname }}-httpd-data.tgz"
    dest: "/tmp/{{ inventory_hostname }}-httpd-data.tgz"
    flat: true # Changes default fetch so it will save directly in destination
 
- name: Copy data archive to backup server from local ansible server
  ansible.builtin.copy:
    src: "/tmp/{{ inventory_hostname }}-httpd-data.tgz"
    dest: "{{ backup_location }}/{{ inventory_hostname }}/{{ inventory_hostname }}-httpd-data.tgz"
  delegate_to: "{{ backup_storage }}"

Taking a look at the above task file, you can see that it first creates an archive of the Apache configuration files. Really, it’s more or less a zip file.

It pulls the archive off the server, then pushes it over to a backup server.

It then repeats these actions for the data directories.

The next playbook is called provision-new-server.yml. I’ll leave you to look at it if you like (a rough sketch follows below), but it:
– Connects to vCenter and provisions a new server
– Waits for the server to pull an IP address
– Adds the new host to the inventory via the Ascender API
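
A minimal sketch of those steps, assuming the community.vmware collection for vCenter and the awx.awx collection for the Ascender API (the vcenter_*, vm_template, and migration_inventory variables are illustrative placeholders, not names from my repository):

---
- name: Provision a replacement server (sketch)
  hosts: migration-hosts
  gather_facts: false
  tasks:
  - name: Clone a new VM from a template and wait for it to pull an IP
    community.vmware.vmware_guest:
      hostname: "{{ vcenter_hostname }}"
      username: "{{ vcenter_username }}"
      password: "{{ vcenter_password }}"
      datacenter: "{{ vcenter_datacenter }}"
      name: "new-{{ inventory_hostname }}"
      template: "{{ vm_template }}"
      state: poweredon
      wait_for_ip_address: true
    delegate_to: localhost
    register: new_vm

  # Controller credentials for awx.awx come from environment variables or a
  # credential configured in Ascender; they are not shown here
  - name: Add the new host to the Ascender inventory via the API
    awx.awx.host:
      name: "new-{{ inventory_hostname }}"
      inventory: "{{ migration_inventory }}"
      variables:
        ansible_host: "{{ new_vm.instance.ipv4 }}"
      state: present
    delegate_to: localhost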

Now that the old server is backed up and the new server has been provisioned, it’s time to restore some services on the new one. This is done with the restore.yml playbook:

---
- name: Playbook to restore configs on new servers
  hosts: migration-hosts 
  gather_facts: false
  vars:
    # The host to store backup info to
    backup_storage: backup-storage
 
    # The location on the backup host to store info
    backup_location: /tmp/migration
 
  tasks:
  - name: Set the restore server variables
    ansible.builtin.set_fact:
      restore_server: "new-{{ inventory_hostname }}"
 
  # - name: Debug restore_server
  #   ansible.builtin.debug:
  #     var: restore_server
 
  # grab a list of the files on the backup server for this host
  - name: Find all files in hosts' backup directories
    ansible.builtin.find:
      paths: "{{ backup_location }}/{{ inventory_hostname }}"
#      recurse: yes
    delegate_to: "{{ backup_storage }}"
    register: config_files
 
  # - name: Debug config_files
  #   when: item.path is search(inventory_hostname + '-httpd.tgz')
  #   ansible.builtin.debug:
  #     var: config_files
  #   loop: "{{ config_files.files }}"
 
  # for each task type, loop through backup files and see if they exist - call restore task file
  - name: If apache is installed, call install task file
    when: item.path is search(inventory_hostname + '-httpd.tgz')
    ansible.builtin.include_tasks: 
      file: apache-restore.yml
    loop: "{{ config_files.files }}"

The first task in the above sets a restore_server variable to the name of the new server. In my playbooks I named the new server “new-{{ inventory_hostname }}”, which is just the name of the old server with “new-” on the front… not overly complex, but it does the trick.

The second task searches each host’s backup directory on the backup server and registers all of the files that were backed up for it.

Somewhat similar to the backup procedure, the last task of the restore loops over the files found on the backup server and calls task files for the various applications/packages. In this case, I’m looking for the Apache backup files and, when they’re found, running the apache-restore.yml task file.

Next is to examine the apache-restore.yml file:

# Task file for installing and configuring apache
 
# - name: Debug restore_server
#   ansible.builtin.debug:
#     var: restore_server
 
# Install apache
- name: Install apache
  ansible.builtin.dnf:
    name: httpd
    state: latest
  delegate_to: "{{ restore_server }}"
 
- name: Copy apache config files to ansible server
  ansible.builtin.fetch:
    src: "{{ backup_location }}/{{ inventory_hostname }}/{{ inventory_hostname }}-httpd.tgz"
    dest: "/tmp/{{ inventory_hostname }}-httpd.tgz"
    flat: true # Changes default fetch so it will save directly in destination
  delegate_to: "{{ backup_storage }}"
 
- name: Copy config archive to new server from local ansible server
  ansible.builtin.copy:
    src: "/tmp/{{ inventory_hostname }}-httpd.tgz"
    dest: "/tmp/{{ inventory_hostname }}-httpd.tgz"
  delegate_to: "{{ restore_server }}"
 
- name: Extract config archive
  ansible.builtin.unarchive:
    src: "/tmp/{{ inventory_hostname }}-httpd.tgz"
    dest: /etc/httpd
    remote_src: true
  delegate_to: "{{ restore_server }}"
 
- name: Copy apache data files to ansible server
  ansible.builtin.fetch:
    src: "{{ backup_location }}/{{ inventory_hostname }}/{{ inventory_hostname }}-httpd-data.tgz"
    dest: "/tmp/{{ inventory_hostname }}-httpd-data.tgz"
    flat: true # Changes default fetch so it will save directly in destination
  delegate_to: "{{ backup_storage }}"
 
- name: Copy data archive to new server from local ansible server
  ansible.builtin.copy:
    src: "/tmp/{{ inventory_hostname }}-httpd-data.tgz"
    dest: "/tmp/{{ inventory_hostname }}-httpd-data.tgz"
  delegate_to: "{{ restore_server }}"
 
- name: Extract data archive
  ansible.builtin.unarchive:
    src: "/tmp/{{ inventory_hostname }}-httpd-data.tgz"
    dest: /var/www
    remote_src: true
  delegate_to: "{{ restore_server }}"
 
- name: Start service httpd and enable it on boot
  ansible.builtin.service:
    name: httpd
    state: started
    enabled: yes
  delegate_to: "{{ restore_server }}"

The above is quite simple. First things first, I install Apache. Next I connect to the backup server, copy the archived config files over to the new server, and extract them. I then do the same thing for the data files. Last, I start and enable the Apache service.

After this, I run the suspend-old.yml playbook to pause the old VM.
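
The suspend-old.yml playbook isn’t shown here, but a minimal sketch of it, assuming vCenter and the community.vmware collection (the vcenter_* variables are illustrative placeholders), looks like this:

---
- name: Suspend the old servers after migration
  hosts: migration-hosts
  gather_facts: false
  tasks:
  - name: Suspend the old VM in vCenter
    community.vmware.vmware_guest_powerstate:
      hostname: "{{ vcenter_hostname }}"
      username: "{{ vcenter_username }}"
      password: "{{ vcenter_password }}"
      name: "{{ inventory_hostname }}"
      state: suspended
    delegate_to: localhost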

Very last, I run my testing playbooks, which are designed per app.
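
For Apache, such a test can be as small as an HTTP check against the new server; here’s a sketch (the playbook layout and the assumption that the site answers on port 80 with a 200 are mine, not from the repository):

---
- name: Verify Apache is serving on the new servers
  hosts: migration-hosts
  gather_facts: false
  tasks:
  - name: Request the front page of the new server and expect a 200
    ansible.builtin.uri:
      url: "http://new-{{ inventory_hostname }}/"
      status_code: 200
    delegate_to: localhost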

Ascender Configuration

I’ve covered adding inventories, projects, and job templates in other blog posts.

I will show the workflow template I created to tie all of the job templates together, though:

A workflow allows me to take playbooks of all sorts and string them together with branching on success or on failure logic. It also allows me to make my playbooks flexible and reusable.

Conclusion

Migrating infrastructure is often complex and time-consuming, and while we can’t conjure up more hours or employees to complete the task, we can employ our secret weapon: automation.

CIQ is ready to help you not only stand up Ascender in your environment; we are also experts at helping you migrate your infrastructure. We have tools to assist, and at the end you’ll have the automations for your environment ready for continued and future use!

As always, thanks for reading and I appreciate your feedback; happy migrating!

Jan 21 / Greg

Why Am I Princess Etch

Hey everybody, I’m Greg Sowell and this is Why Am I, a podcast where I talk to interesting people and try to trace a path to where they find themselves today.  My guest this go around is Jane Labowitch, better known as Princess Etch.  As the royal name implies, she is an artist that uses an etch-a-sketch as her medium.  In this chat, I follow her down the rabbit hole on how each etch performs differently and how it takes time to find the right one for the job.  She also shares how the video you see is backed by hours upon hours of work that are completely invisible…man, I love artists 🙂  I hope you enjoy this conversation with Jane. Help us grow by sharing with someone!

Youtube version here:

Please show them some love on their socials here: https://princessetch.com/,

https://www.tiktok.com/@princessetch,

https://www.instagram.com/princessetch/,

https://www.patreon.com/m/princessetch.

If you want to support the podcast you can do so via https://www.patreon.com/whyamipod (this gives you access to bonus content including their Fantasy Restaurant!)

Jan 15 / Greg

Fantasy Restaurant Kelly Edwards

Welcome to the warmup exercise for the Why Am I podcast called “the Fantasy Restaurant.”  In here my guests get to pick their favorite: drink, appetizer, main, sides, and dessert…anything goes. Join us in the slow-moving and beautiful south of France; as we step inside a building, we are transported to DC 40 years ago, to a bustling kitchen full of family and love.  Oh, also there’s a bottomless basket of bacon LOL.  I hope you enjoy this meal with Kelly. Help us grow by sharing with someone!

Youtube version here:

Please show them some love on their socials here: https://kellyedwards.co/,

https://www.instagram.com/kellyedwards_co/,

https://a.co/d/cmXWIj9,

https://www.facebook.com/kellyedwardsco.

If you want to support the podcast you can do so via https://www.patreon.com/whyamipod (this gives you access to bonus content including their Fantasy Restaurant!)

Jan 7 / Greg

Why Am I Kelly Edwards

Hey everybody, I’m Greg Sowell and this is Why Am I, a podcast where I talk to interesting people and try to trace a path to where they find themselves today.  My guest this go around is Kelly Edwards.  She’s got an impressive resume: former exec in LA, film producer, writer, teacher…but she only touches on those things.  They don’t define the person inside of Kelly, because she’s a phoenix that is constantly being reborn into a completely different person.  Not only that, but she opens her eyes each day excited to see who she’ll become.  I hope you enjoy this chat with Kelly. Help us grow by sharing with someone!

Youtube version here:

Please show them some love on their socials here: https://kellyedwards.co/,

https://www.instagram.com/kellyedwards_co/,

https://a.co/d/cmXWIj9,

https://www.facebook.com/kellyedwardsco.

If you want to support the podcast you can do so via https://www.patreon.com/whyamipod (this gives you access to bonus content including their Fantasy Restaurant!)

Jan 1 / Greg

Review of In And Of Itself – with Kristi Sowell

Hey everybody, I’m Greg Sowell and this is Why Am I, a podcast where I USUALLY talk to interesting people and try to trace a path to where they find themselves today.  This time around Kristi jumps in and discusses one of my favorite performances, “In And Of Itself”. It’s us breaking down what parts meant the most to us…and I learn a couple of things (as always LOL)… and yes, I get in my feels…try and power through LOLOL. Help us grow by sharing with someone!

Youtube version here:

Please show them some love on their socials here: https://www.instagram.com/sowellkr/, https://www.instagram.com/cardiocandy/?hl=en, https://www.instagram.com/selfvisualized/?hl=en.

If you want to support the podcast you can do so via https://www.patreon.com/whyamipod (this gives you access to bonus content including their Fantasy Restaurant!)

Dec 24 / Greg

Fantasy Restaurant Stan Zimmerman

Welcome to the warmup exercise for the Why Am I podcast called “the Fantasy Restaurant.”  In here my guests get to pick their favorite: drink, appetizer, main, sides, and dessert…anything goes. We start this meal out light, then, thankfully, come to our senses and start just putting garlic on all of the things…we even have a triple dessert!  I hope you enjoy this meal with Stan. Help us grow by sharing with someone!

Youtube version here:

Please show them some love on their socials here: https://www.zimmermanstan.com/, https://www.instagram.com/zimmermanstan/?hl=en, https://www.amazon.com/dp/1954676603.

If you want to support the podcast you can do so via https://www.patreon.com/whyamipod (this gives you access to bonus content including their Fantasy Restaurant!)

Dec 17 / Greg

Why Am I Stan Zimmerman

Hey everybody, I’m Greg Sowell and this is Why Am I, a podcast where I talk to interesting people and try to trace a path to where they find themselves today.  My guest this go around is Stan Zimmerman.  If there is a thing you can do in Hollywood, I’m pretty sure Stan has done it somewhere in his 3-decade-plus career.  He’s written episodes from Golden Girls to Gilmore Girls (which helped inspire the title and theme of his new book).  I told you this kid has done it all.  He’s also incredibly open, honest, and empathetic, but most importantly funny :).  I hope you enjoy this chat with Stan. Help us grow by sharing with someone!

Youtube version here:

Please show them some love on their socials here: https://www.zimmermanstan.com/, https://www.instagram.com/zimmermanstan/?hl=en, https://www.amazon.com/dp/1954676603.

If you want to support the podcast you can do so via https://www.patreon.com/whyamipod (this gives you access to bonus content including their Fantasy Restaurant!)