Oct 14 / Greg

Provision Azure Windows/Linux VMs Using Ansible Automation Platform, Plus Post Provisioning


So not only will I be provisioning Windows and Linux VMs, but I'll also be adding them to my inventory, doing post-provision hardening, and running a system scan.
Having said that, the hardening and scanning here are "art of the possible," as those steps will absolutely vary based on your needs; treat them as fill-in-the-blank.

Video Demo

Playbooks

All of my playbooks can be found here.
I'm going to start with the main provisioning playbook (found here):

---
- name: Create Azure VM
  hosts: localhost
  gather_facts: false
  vars: 
    os_type: windows
#    os_type: linux
    inventory_name: Azure Manual
    vm_name: ZZZTestDeploy1
    # inject this at run time via custom credential
    vm_password: "{{ gen1_pword }}"
    vm_username: "{{ gen1_user }}"
    RG_name: cloud-shell-storage-eastus
    virtual_network: testvn001
    sec_group: secgroup001
    subnet_name: subnet001
  tasks:
  - name: Create resource group
    azure_rm_resourcegroup:
      name: "{{ RG_name }}"
      location: eastus
    tags:
    - never
    - setup
 
  - name: Create virtual network
    azure_rm_virtualnetwork:
      resource_group: "{{ RG_name }}"
      name: "{{ virtual_network }}"
      address_prefixes: "172.29.0.0/16"
    tags:
    - never
    - setup
 
  - name: Add subnet
    azure_rm_subnet:
      resource_group: "{{ RG_name }}"
      name: "{{ subnet_name }}"
      address_prefix: "172.29.0.0/24"
      virtual_network: "{{ virtual_network }}"
    tags:
    - never
    - setup
#  - name: Create public IP address
#    azure_rm_publicipaddress:
#      resource_group: myResourceGroup
#      allocation_method: Static
#      name: "{{ vm_name }}_PublicIP"
#    register: output_ip_address
#  - name: Public IP of VM
#    debug:
#      msg: "The public IP is {{ output_ip_address.state.ip_address }}."
 
  - name: Create security group that allows SSH/HTTP/HTTPS
    azure_rm_securitygroup:
      resource_group: "{{ RG_name }}"
      name: "{{ sec_group }}"
      rules:
        - name: SSH
          protocol: Tcp
          destination_port_range: 22
          access: Allow
          priority: 101
          direction: Inbound
        - name: HTTP
          protocol: Tcp
          destination_port_range: 80
          access: Allow
          priority: 102
          direction: Inbound
        - name: HTTPS
          protocol: Tcp
          destination_port_range: 443
          access: Allow
          priority: 103
          direction: Inbound
        - name: WINRM
          protocol: Tcp
          destination_port_range: 5986
          access: Allow
          priority: 104
          direction: Inbound
        - name: WINRMUN
          protocol: Tcp
          destination_port_range: 5985
          access: Allow
          priority: 105
          direction: Inbound
    tags:
    - never
    - setup

^^ Above is the first half of the playbook. You'll notice that I set up some variables that will be used in the following section. I already have an Azure environment set up, so I'm just duplicating those configs here. There are also vm_username and vm_password variables that are important: these are the admin credentials that will be added to the VM once it's stood up. I inject them at run time via a custom credential, so they are never stored in plaintext in my repo, which is particularly important since it's all public.
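
For reference, the custom credential piece boils down to a credential type in AAP whose input and injector configuration look roughly like the sketch below (field IDs match the vars above; the labels are arbitrary):

Input configuration:
fields:
  - id: gen1_user
    type: string
    label: VM admin username
  - id: gen1_pword
    type: string
    label: VM admin password
    secret: true

Injector configuration:
extra_vars:
  gen1_user: "{{ gen1_user }}"
  gen1_pword: "{{ gen1_pword }}"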

Once I reach the tasks section I do all of the Azure setup, but again, I've already got this in place. The good part about these modules is that they are idempotent: they can run, see that the state already exists as expected, and make no changes, simply reporting "ok". It does, however, take a little additional time to perform these steps, so I chose to add tags of "never" and "setup". The never tag is a special one that says "don't run this task unless the never tag or another tag on this task is invoked at run time."
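
As a quick sketch of how that plays out from the CLI (the playbook filename here is a placeholder; in AAP the equivalent is the Job Tags field on the job template):

# Normal run: untagged tasks run, the never/setup-tagged Azure setup tasks are skipped
ansible-playbook azure-provision.yml

# First-time run: include the setup tasks alongside everything else
ansible-playbook azure-provision.yml --tags all,setup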

The rest of the playbook is as follows:

  - name: Create virtual network interface card
    azure_rm_networkinterface:
      resource_group: "{{ RG_name }}"
      name: "{{ vm_name }}_NIC"
      virtual_network: "{{ virtual_network }}"
      subnet: "{{ subnet_name }}"
#      public_ip_name: "{{ vm_name }}_PublicIP"
      security_group: "{{ sec_group }}"
    register: vm_nic
#  - name: debug vm_nic
#    debug:
#      var: vm_nic
 
  - name: Create Linux VM
    when: os_type != "windows"
    azure_rm_virtualmachine:
      resource_group: "{{ RG_name }}"
      name: "{{ vm_name }}"
      vm_size: Standard_DS1_v2
      admin_username: "{{ vm_username }}"
      admin_password: "{{ vm_password }}"
#      ssh_password_enabled: false
#      ssh_public_keys:
#        - path: /home/azureuser/.ssh/authorized_keys
#          key_data: "<key_data>"
      network_interfaces: "{{ vm_name }}_NIC"
      # os_type defaults to linux, so specify windows if needed
      image:
        offer: CentOS
        publisher: OpenLogic
        sku: '7.5'
        version: latest
 
  - name: windows provision block
    when: "os_type == 'windows'"
    block:
      - name: Create Windows VM
        azure_rm_virtualmachine:
          resource_group: "{{ RG_name }}"
          name: "{{ vm_name }}"
          vm_size: Standard_DS1_v2
          admin_username: "{{ vm_username }}"
          admin_password: "{{ vm_password }}"
  #        ssh_password_enabled: false
  #        ssh_public_keys:
  #          - path: /home/azureuser/.ssh/authorized_keys
  #            key_data: "<key_data>"
          network_interfaces: "{{ vm_name }}_NIC"
          # os_type defaults to linux, so specify windows if needed
          os_type: Windows
          open_ports:
            - 3389
            - 5986
            - 5985
            - 22
          image:
            offer: WindowsServer
            publisher: MicrosoftWindowsServer
            sku: 2019-Datacenter
            version: latest
 
      - name: Create VM script extension to enable HTTPS WinRM listener
        azure_rm_virtualmachineextension:
          name: winrm-extension
          resource_group: "{{ RG_name }}"
          virtual_machine_name: "{{ vm_name }}"
          publisher: Microsoft.Compute
          virtual_machine_extension_type: CustomScriptExtension
          type_handler_version: '1.9'
          settings: '{"fileUris": ["https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1"],"commandToExecute": "powershell -ExecutionPolicy Unrestricted -File ConfigureRemotingForAnsible.ps1"}'
#          settings: '{"fileUris": ["https://raw.githubusercontent.com/gregsowell/ansible-windows/main/install-ssh.ps1"],"commandToExecute": "powershell -ExecutionPolicy Unrestricted -File install-ssh.ps1"}'
          auto_upgrade_minor_version: true
        tags:
        - winrm
 
  - name: Add the host to AAP inventory
    awx.awx.host:
      name: "{{ vm_name }}"
      description: "Added via ansible"
      inventory: "{{ inventory_name }}"
      state: present
      variables:
        ansible_host: "{{ vm_nic.state.ip_configuration.private_ip_address }}"

^^ There's a bit to unpack here, but it's mostly straightforward.
First I create a virtual network interface card that shares a name with the VM being created. This ensures all of the VM's NICs follow a naming convention that matches the VM itself.

Next is the "Create Linux VM" task. I have a conditional in place that checks that I'm not provisioning a "windows" machine; if that's the case, it spins up a CentOS box real quick like.

After that, and most interesting, is provisioning a Windows machine. Here I'm using a block along with a conditional, which means that when os_type is set to "windows" it will attempt to perform everything within the block. First it cranks up a 2019-Datacenter Windows server. Note that in this task I have to add the os_type: Windows option; that's because by default Azure assumes you want to provision a Linux VM (I found this telling). I also specify a few additional ports that should be opened in the device's firewall, all of them allowing remote admin access.
Next in the block is a custom script extension that enables WinRM so that AAP is able to remote into the device. Using the virtualmachineextension module, AAP connects to Azure and instructs it to run this script on the server, which causes the server to download and execute the WinRM configuration script.

Last, it connects to the local AAP server and adds the newly created host to the Azure inventory, using the IP reported from the creation of the VM NIC.

Since I'm really trying to deploy Windows VMs (Linux is easy, LOL), I'll add some additional playbooks for hardening and scanning. Here's my hardening playbook, which is really just art of the possible:

---
- name: Harden windows hosts in Azure
  hosts: "{{ vm_name }}"
  gather_facts: false
  tasks:
  - name: block for windows hardening
    when: os_type == "windows"
    block:
      - name: win shell to ping self
        win_shell: ping 127.0.0.1
        register: ping_res
 
      - name: print ping results
        debug:
          var: ping_res.stdout_lines
 
      - name: import task 1
        debug:
          msg: windows block 1 for hardening

I'm running this as part of a workflow in AAP, so I've already supplied vm_name to the system, which is why it's used as the hosts entry. If you recall from the previous playbook, the very last task added the host to my inventory for this very purpose.

Next, I'm double-checking that this is a Windows machine being provisioned, and if it is, I'll do my hardening. Again, I'm just showing that configuration is possible, as I wanted to keep it as generic as possible. I suspect there would be firewall updates and password complexity requirements set; really, whatever corporate policy dictates. I'm simply using the win_shell module to execute a ping to the local host, then displaying the results.
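
As a rough sketch of what real hardening tasks could look like inside that same block, assuming the community.windows collection is installed (the specific rule and policy values here are invented for illustration):

      - name: Block inbound SMB at the Windows firewall (example policy)
        community.windows.win_firewall_rule:
          name: Block inbound SMB example
          localport: 445
          protocol: tcp
          direction: in
          action: block
          state: present
          enabled: yes

      - name: Enforce a minimum password length (example policy)
        community.windows.win_security_policy:
          section: System Access
          key: MinimumPasswordLength
          value: 14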

Last in my workflow is performing some compliance-checking actions, like running a system scan. Again, fill in the blanks based on your local policies.
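
For instance, a stand-in scan task could be as simple as kicking off a Windows Defender quick scan via win_shell (assuming Defender is present on the image):

  - name: Run a Defender quick scan as a placeholder compliance check
    win_shell: Start-MpScan -ScanType QuickScan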

AAP Configuration

I make good use of custom credentials to supply various username/password combos directly into my playbooks. I won't rehash that, as I've written about it here.

I also added a lot of good Azure connection info here in a previous post about Satellite/Azure/AAP.

Here’s a quick shot of my workflow:

When the workflow above runs, it will first configure the VM, then split and run Windows/Linux hardening. The hardening playbooks check the OS type and only run when the proper OS is detected. Last, they converge in a system scan.

I could hardcode the VM name and type to provision, but I figured it made more sense to add a survey to the workflow to allow the user to be prompted at runtime for that information:

I suppose something else of interest: when I put the host into an inventory, AAP won't know how to connect unless I help it out. I could add the Windows devices to a windows group and the Linux hosts to a linux group, then specify connection settings on those groups accordingly. Version 2 of this will likely add some group work, but for now I just placed the settings directly on the inventory itself:

---
ansible_port: 5986
ansible_connection: winrm
ansible_winrm_scheme: https
ansible_winrm_server_cert_validation: ignore
ansible_winrm_kerberos_delegation: true

^^ These settings tell AAP how to connect to the Windows VMs via WinRM.
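
For reference, the group-based version mentioned above might look something like this in a YAML inventory (the group names are hypothetical), keeping the WinRM settings off the Linux hosts entirely:

all:
  children:
    windows:
      vars:
        ansible_connection: winrm
        ansible_port: 5986
        ansible_winrm_scheme: https
        ansible_winrm_server_cert_validation: ignore
        ansible_winrm_kerberos_delegation: true
    linux:
      vars:
        ansible_connection: ssh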

Conclusion

None of this is earth-shattering; it's the boring foundation upon which infrastructure is built. While the topic can seem lofty, when you boil it all down there really aren't that many moving pieces, so I feel like the barrier to entry is actually pretty low. I'd love to see how you would modify the workflow to suit your needs, so leave me any feedback you have.

Oct 11 / thebrotherswisp

The Brothers WISP 145 – Palooza New Vendors, Facebook Outage, MTK Newsletter 102

This week we have Greg, Greg L (https://www.linkedin.com/in/glipschitz/), Chad Wachs, and Mike Hammett… and yes, Pepperidge Farm remembers.

PS: Mike’s audio is bad for a few minutes, but we fix it up pretty quick.

**Sponsors**
Sonar.Software
Towercoverage.com
**/Sponsors**

This week we talk about:
WISPAPALOOZA – Swag, LTE products
Selling phone service
Facebook outage, I didn’t get a single support call…and yes my phones are working again.
OOB mgmt methods
work from home during covid
Mtk newsletter 102
DPSK on Ruckus h500s
I had to replace more FS SFP+s in my MDU, and this time went with industrial SFPs
Got to eat lunch with Doug Eames, Colin Z. and Grumpy Old Matt in Dallas.

Here’s the video:(if you don’t see it, hit refresh)

Sep 26 / thebrotherswisp

The Brothers WISP 144 – voip.ms DDoS, Meris Botnet, Cloud Mitigation

This week we have Greg and Nick A. with this week’s after school special.

**Sponsors**
Sonar.Software
Towercoverage.com
**/Sponsors**

This week we talk about:
Equipment lead times – seems slow, but moving.
voip.ms ddos
voip.ms ddos possibly extortion attack
bandwidth.com saw hours of outage today too…another victim?
Cloud based mitigation on services
Is your Mikrotik part of Meris botnet
Meris breakdown
Zach made a Meris check playbook for Ansible
Thrift reports some instability in the beta MLAG, but expects it to be sorted
6.49.rc1 *) winbox – added “interface-speed-100G” LED type to “System/LEDs” menu;

Here’s the video:(if you don’t see it, hit refresh)

Sep 13 / thebrotherswisp

The Brothers WISP 143 – ROS Docker, Zerotier, Cloud DNS

This week we have Greg, Nick A., and Alex Hart giving us the Hart to Heart.

**Sponsors**
Sonar.Software
Towercoverage.com
**/Sponsors**

This week we talk about:
ROS V7.rc3 adds docker containers (additional package)
Colin’s blog shout out
Thrift says 7rc1 mlag seems stable, but Ole says wifi is not on some of the older hardware
Zerotier has some folks excited…or perhaps spicy is a better word for it LOLOL
Mikrotik IP cloud bricked for a day or so

Here’s the video:(if you don’t see it, hit refresh)

Aug 29 / thebrotherswisp

The Brothers WISP 142 – ROS V7 RC, Unifi Talk, dPSK In MDUs

This week we have Greg and Nick A. giving our annual fire side chat.

**Sponsors**
Towercoverage.com
**/Sponsors**

This week we talk about:
Mikrotik ROS V7.1rc1 released
Unifi Talk
Satellite Azure deploy VM with AAP final config
dPSK vs SSID per resident in MDU
Aliexpress blackholes

Here’s the video:(if you don’t see it, hit refresh)

Aug 23 / Greg

Using Ansible Automation Platform For Post Configuration With Red Hat Satellite Provisioning Via Callbacks


That title is a mouthful.
In essence, what I’m doing is:
– Connecting my Satellite install to Azure so I can deploy RHEL images
– Once I use Satellite to deploy the image and it is provisioned, the VM will contact my AAP server via a “Callback”
– The AAP (call it Tower or Control, whichever you prefer) will then sync its inventory from Satellite and connect to the VM to perform whatever configuration you like

Demo Video

Products Used

Azure is the cloud provider I’ll be deploying my VMs into. I’m using the free tier to do all my lab work in. Key here is to kill or pause anything you deploy but don’t use.

Red Hat Satellite is primarily a life-cycle management system for RHEL environments…think of it as a kind of WSUS for RHEL.

The Ansible Automation Platform is kind of all I talk about these days :). It’s the automation platform that will be doing the post provision configurations.

Red Hat Enterprise Linux (RHEL): the enterprise-ready, fully supported Linux OS.

Satellite Install/Configuration

I started with a standard RHEL7 server in my lab with pretty much a flat configuration. The role and following playbook get Satellite up and running (found in my repo here). The role is originally from here.
The install playbook can be found here:

I won't cover everything in the playbook, but in short it sets up a fully qualified domain name and hosts entry on the Satellite server, issues a yum update, then calls the Satellite install role. The FQDN configuration is required for Satellite; otherwise it barfs.
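
The FQDN portion boils down to a couple of tasks like these (a sketch; the hostname and IP here are placeholders, the real tasks live in the repo linked above):

  - name: Set the fully qualified hostname
    hostname:
      name: satellite.gregsowell.com

  - name: Ensure the FQDN resolves locally
    lineinfile:
      path: /etc/hosts
      line: "10.0.0.50 satellite.gregsowell.com satellite"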

After the install script completes, I should be able to browse over to it (I've also created a DNS entry for the host):

AAP Configuration

This section ultimately sets up a callback method to activate a job template. It allows any host to issue a curl command to AAP, which will then fire off a job template directed at that host. You can find the blog post I used as a reference here. Note that the method for calling this via Satellite as shown in that post did not work out of the box for me, so I pivoted to something else, which I'll outline later; the section where they set up the custom parameters can be skipped.
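
Stripped to its essence, the callback is just an HTTP POST from the host; something like the following, where the server name, template ID, and key are placeholders (the real command appears in the Custom Script section later):

curl --insecure --data "host_config_key=<host_config_key>" https://<aap-server>/api/v2/job_templates/<id>/callback/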

I need to set up credentials for AAP to connect to my Satellite server so that it can pull all of the hosts into an inventory.
I'll go to credentials and create a new entry with a credential type of Red Hat Satellite 6:

I’ll now be able to use that to connect to a custom inventory source.

You'll also now need to create a user account that matches what Satellite will be setting up on the VMs when it provisions them. In my case it was the user "ansible", so I'll create that cred too.

I’ll now create the inventory for my satellite server:


^^ This is one of the VERY important pieces here. In the inventory source I choose Red Hat Satellite 6, which gives me some additional options. I can now choose the Satellite credential I just created, so it knows where to go to grab the info. I then enable all of the update options; this causes the Satellite server to be re-queried each time this inventory is accessed. It will also clean out any old entries and only pull in active hosts from Satellite.
Now for an extremely important piece! Notice that I have something in the source variables section. The compose option allows me to build new variables from the information returned by the queried source; in this case, everything returned is prefixed with "foreman_". The callback method used here has to verify that the calling host has an entry in the inventory. If you register hosts in a DNS server as they are provisioned, all will work just fine, since the inventory saves the FQDN it uses to connect to hosts. If, however, like me, you aren't doing automatic DNS updates, it will fail at this step. If the ansible_host variable is present it will work just fine, so that's why I have compose create the ansible_host entry: it makes this all work sans DNS auto updates.
Source Variable settings:
---
compose:
  ansible_host: foreman_ip

I should be able to sync the inventory and check one of the hosts to see that it created the ansible_host variable:


I'll now add a job template in my AAP based on this playbook (obviously you will want to pull it in with the rest of the files via a project).

I’ll take a look at this playbook first, then look at the job template.
The playbook starts by making sure there isn't already a host key saved on the system for this IP. For some reason, when using the callback method, a key gets added to the host key file, which will cause you issues when you're labbing things. This is why the first task clears out any entries for the host IP in question.
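
That cleanup can be as small as a known_hosts removal delegated to the control node; a sketch (the actual task lives in the playbook linked above):

  - name: Remove any stale SSH host key for this host's IP
    known_hosts:
      name: "{{ ansible_host }}"
      state: absent
    delegate_to: localhost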

Next I set up the FQDN, then download and install the Katello RPMs from the Satellite server (you will need to update the Satellite URL here).

After that I do a quick hack that fixes a wacky DNS issue. When you create an Azure resource group, Azure assigns it a random cloudapp.net domain used to reference those objects fully qualified. My entries in Satellite, however, use my gregsowell.com domain, so when I issue the subscription-manager register command on the VM, it creates a duplicate entry sourced from the hostname plus the crazy cloudapp.net domain. The hack around this is to create a katello.facts file and populate the network.fqdn entry with my inventory_hostname.
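
In task form, that hack is roughly a one-file copy, assuming the standard subscription-manager custom facts path:

  - name: Pin the reported FQDN so registration matches the Satellite entry
    copy:
      dest: /etc/rhsm/facts/katello.facts
      content: '{"network.fqdn": "{{ inventory_hostname }}"}'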

Now I register the host to the Satellite server using the subscription-manager register --force --org="NASA_JPL" --activationkey="library-activation" command. You'll notice that there's a second commented-out entry; I left it in to show how I attempted to use the "--name" option. I figured it would adjust the FQDN, but it does not, LOL, so don't waste your time with it.

I next pull down my satellite tools and install the katello packages.

This is technically the end of the playbook, but it doesn't have to be. This is the point where I could do further checking… say, for example, checking which host group the VM is a member of and calling another playbook based on it, for a www group or perhaps a db group.
For example, I added this host using a Satellite host group named “default”, so it shows up in my inventory as being a member of the “foreman_default” group, which I could match on:

For that matter, I could check the server name and see if it contains www or db and then process further. Keep in mind that this is just the kicking-off point.
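
A hypothetical branch on the host group could be as simple as matching group_names (www-config.yml is a made-up include):

  - name: Apply web server configuration to the default host group
    include_tasks: www-config.yml
    when: "'foreman_default' in group_names"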

Now that I’ve looked at the playbook, I’ll examine the job template used:

The important bits here: the satellite2 inventory that I created above, the same credential set that I configured in Satellite, and enabling provisioning callbacks. Once provisioning callbacks are enabled, it will give you the provisioning callback URL as well as the host config key to utilize. The callback URL and host key are used in the compute resource Custom Script section below.

Azure Configuration

To provision VMs, you need to create several things. One is a resource group under your subscription; an RG holds all of your assets (networks, interfaces, VMs, etc.). You can create all of this manually, or you can use AAP to build it all. My playbook for prepping the environment is here.

The script first creates a storage account to keep all of your items in.
It then creates a virtual network and a subnet inside of that network.
Next it creates a security group and allows SSH in. Your mileage may vary on this; I'm just in a lab and there won't be any public IPs on my resources, so plan accordingly.
After that I create a virtual NIC so that I have something I can do test pings to. This is completely optional as I only have it in place for testing, but it’s not a bad idea.
The last section is prep work for my LAN to LAN VPN connection, which again, is optional depending on what you are doing in your environment. I currently have my AAP and satellite servers installed in my lab, so to reach the private addressing back and forth I have a VPN tunnel setup.

I ran into a lot of random issues; for example, I hit a microsoft.compute resource missing error. You need to enable that resource provider in Azure so that you can create virtual machines: go into your subscription > resource providers > search for microsoft.compute and add it:

Azure Resource Manager To Satellite Configuration

The official Red Hat guide I followed is here.

You first need to create an application that can be used by Satellite to log in and manipulate resources on Azure. Follow the link here to get the info for the next section.

Infrastructure > Compute Resources > create Compute Resource

All of the info here is derived from adding your application above as well as the subscription section itself.

Once you fill in the blanks you want to load in the different regions:

Once you choose a region, any resource provisioned from this compute resource will forever be in that region. Keep in mind that not all resources are available in all regions. For example, I live in Texas, so I was originally using the South Central region. I kept having failures trying to deploy standard RHEL7 images… and it turns out they aren't available in that region. So my solution (since it is just a lab) was to deploy to the EastUS region, which happily deploys the image. Once you create this compute resource in a region, you can't go back and edit it. You can add an additional resource and put it in a different region if you want, but these aren't modifiable.

I'll now want to add some images for this resource to be able to deploy.
From inside the compute resource, click on Images and Create Image:

I'll now fill out the details for the image. Here I'm just going with the latest LVM RHEL7 image. Notice that you have to preface the name of the image with "marketplace://". Also note that you can't log in directly as root, so create a username that works for you; since I'm using Ansible for all of the things, I'm making the user "ansible".

Next, click on Compute Profile from inside of the Azure resource manager details. By default it has three t-shirt sizes: small, medium, and large. I’m going to click on small and do some additional configuration just for that(just for labbing I’m keeping it simple).

The first options are setting up some of the VM options:

The compute resource was created by my earlier Azure provisioning playbook(see above).
These images have a recommended VM size of Standard_D2s_v3, but by all means customize it as you see fit.
Username/password are the ansible creds I'll use to connect in; this is also your opportunity to enter an SSH key to be installed. I'm just going to be using username/pass.
I’ll now add a network interface(again from the previously provisioned playbook above):

I now have my Azure configured subnet available:

I’ll now add a storage device and give it about 50GB.

I’m going to scroll back up and add a custom script. I’ll show it first, then explain what it’s doing as this is VERY important:

Custom Script:nohup sleep 45 && curl --insecure --data "host_config_key=3216011a-750d-4e38-b387-f0d533ee" https://towerprivate.gregsowell.com:443/api/v2/job_templates/110/callback/ &
So first I used nohup. This throws a command into the background, so that even if the terminal is disconnected it will still complete. This is important, as Satellite will connect to the VM, throw the command, and leave.
Next it sleeps for 45 seconds. This is in place so that Satellite has enough time to put the VM into its database, so that when AAP is called this host will show up in the inventory.
After that I use the standard curl callback command that instructs my Tower server to kick off my specific callback job template.
Last, I have a lone "&", which kicks the whole command into the background.
This custom script command is what kicks off all of the magic.

Satellite Prereqs

Most likely you have all of this in place; I'm just putting this here as a reminder to myself since I built this from scratch.
I'm first going to add my subscription. Well, I suppose first you want to generate the manifest file and download it (follow this link for that). I've connected to access.redhat.com and downloaded my manifest, then imported the zip file into Satellite:

I’ll then add my repos:

Next I’ll search for “rhel-7” and hit the button for only available repos:

Next I’ll expand, then hit the + on any repos I want. In this case I want satellite tools for sure as I’m going to be pushing those in my demo:

Now I synchronize my repos:

You’ll likely want to create a sync plan that will update your repos on a regular schedule.

I’m now creating an activation key: content > activation key:


You can see that I was lazy and didn't create a new life-cycle; instead I just used the library, which is the catch-all.
I’ll now hit the repos section and enable all of my repos:

Notice the activation key from the details menu, make note of it as I’ll be using it later in the process. It’s what the VMs phone back home with to register to the Satellite server:

Create Satellite Host Group

Configure > host group

Now I create a new host group:

You will undoubtedly know more about all of the various lifecycles and content views you want to configure. The "Deploy on" and "Compute profile" fields are of particular interest: I choose the Azure resource and the small compute profile just configured. It can also save a lot of time if you fill out the other sections that are of interest to you.
In this host group I’ll click on activation key and add the key I saved from the satellite prereq section:

Deploy VMs

In Satellite I go to hosts > create hosts:

I now choose the host group that I’ve created, and it will populate virtually everything I need:

I’ll now pop over to the operating system section and choose my rhel7 image and setup the root password:

Once I click submit, it will take a minute or two, but I should see an AAP job kick off: first the inventory update, then the job template will fire and complete.

If I go to hosts > content hosts, I can see that the newly provisioned VM is in the list, and it shows up as registered, which means the AAP job template completed and had the VM phone home to the Satellite server!

Conclusion

I know this has a lot of moving pieces, but if you already have your Satellite setup and you have your Azure environment up and working, then this is a really quick project. Most of the above is me building everything from scratch, which the vast majority of you won’t have to do LOL.

As with anything, there are likely faster and more efficient ways of doing these things, and if that’s the case, please let me know what you come up with/how you would do it. Questions/comments always welcome.

Thanks, and happy Satelliting!

Aug 16 / thebrotherswisp

The Brothers WISP 141 – Mikrotik RB5009, CCR2004, Marvell Prestera

This week we have Greg, Mike, Tommy, and Andrew Thrift doing some hardware catchup.

**Sponsors**
Towercoverage.com
**/Sponsors**

This week we talk about:
RB5009 video with Janis Megis!
RB5009 – 1GB RAM/Flash, 4 core 1.4GHz ARMv8, 7 gig ports, 1 2.5Gb, 1 SFP+ – $219 USD
RB5009 rack mount – 4 per 1U
New CCR2004 video with Pauls Jukonis
CCR2004-1G-12S+2XS – 16 copper gig ports, 2 SFP+, dual AC PSU, 2 switch chips each with 10Gb lane – $465 USD
V7.1 route filter updates
Thrift V7 testing CRS3XX
MCLAG/MLAG on CRS3XX testing
Marvell Acquiring Innovium for High-End Switch ASIC
Marvell Prestera 7k – 3 Chipsets. DX7312 300G (12x 25G SERDES), DX7325 1.2T (24x 50G SERDES), DX7335 1.6T (32x 50G SERDES) – MACSec, SRv6, 400G Optics
Speeduino

Here’s the video:(if you don’t see it, hit refresh)