Dec 1 / Greg

Automating Infoblox DDI With The Ansible Automation Platform


Infoblox DDI is a very powerful and popular DHCP, DNS, and IPAM system used by enterprises worldwide. I’ve heard customer after customer talk about it, so I thought I would take a look at adding it to my demos. Fortunately it’s super simple to sign up for a demo copy that gives you a repeatable 60-day trial. I grabbed the VMware OVA file, told it to boot, gave it an IP, and then I was up and running.

Demo Video

Github Repo

Git Repo found here.

Building My Lab

My first playbook connects to the DDI server and builds the simple environment:

First things first; you are going to see this section in use with all of the nios modules and plugins:

  vars:
    nios_provider:
      host: "{{ ddi_host }}"
      username: "{{ ddi_username }}"
      password: "{{ ddi_password }}"

This is the connection information used to access the DDI server. I’m passing the info into the playbook at runtime via a custom credential in Tower (my favorite way to store and use special credentials).
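The credential type itself isn’t shown here, but a minimal sketch of its inputs and injectors could look like this, assuming field IDs that match the playbook’s variable names:

fields:
  - id: ddi_host
    type: string
    label: DDI Host
  - id: ddi_username
    type: string
    label: Username
  - id: ddi_password
    type: string
    label: Password
    secret: true
required:
  - ddi_host
  - ddi_username
  - ddi_password

And the injector, passing the inputs in as extra variables:

extra_vars:
  ddi_host: '{{ ddi_host }}'
  ddi_username: '{{ ddi_username }}'
  ddi_password: '{{ ddi_password }}'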

The first task utilizes a loop to create two forward zones, gregsowell.com and tacotuesday.com.
I then loop again, this time with a zone type of IPv4, to create a couple of reverse IPv4 zones.
Both of these were dead simple to use.
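A minimal sketch of those two tasks with the nios_zone module (the zone names come from above; the reverse subnets are placeholders):

  - name: create forward zones
    nios_zone:
      fqdn: "{{ item }}"
      state: present
      provider: "{{ nios_provider }}"
    loop:
      - gregsowell.com
      - tacotuesday.com

  - name: create reverse IPv4 zones
    nios_zone:
      fqdn: "{{ item }}"
      zone_format: IPV4
      state: present
      provider: "{{ nios_provider }}"
    loop:
      - 10.0.0.0/24
      - 10.0.1.0/24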

Add Hosts And Next IPs

This playbook takes a hostname (held in the test_host variable in this case) and adds it to DDI.

The second important variable is the subnet_range; this is the subnet the host’s IP address will be sourced from.

The first task uses the host record lookup plugin to check whether the host entry already exists.
If it does exist, it will print out a message that says as much.
If the host entry doesn’t exist, the following tasks will use the “nios_next_ip” lookup plugin and create a host entry. The lookup plugin is really clever; it will query the subnet and return the next available IP in the range for use in your automation. You can then take that IP and assign it to your host.
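A sketch of that check-then-create flow, assuming test_host and subnet_range are supplied at run time:

  - name: check for an existing host record
    set_fact:
      existing_host: "{{ lookup('nios', 'record:host', filter={'name': test_host}, provider=nios_provider) }}"

  - name: report that the host entry already exists
    debug:
      msg: "{{ test_host }} already has a host record"
    when: existing_host

  - name: create the host entry with the next available IP from the range
    nios_host_record:
      name: "{{ test_host }}"
      ipv4:
        - address: "{{ lookup('nios_next_ip', subnet_range, provider=nios_provider)[0] }}"
      state: present
      provider: "{{ nios_provider }}"
    when: not existing_host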

Provision VMs Utilizing DDI Next IP

This playbook utilizes VMware templates to provision new hosts. I’ve coupled it with surveys in my Tower instance: clone a VMware template to create a new host, use DDI to assign an IP to the host, and finally add the host entry, which populates IPAM along with the DNS forward and reverse entries.

This playbook checks to see if the IP address is set to “ddi”; if it is, that indicates DDI should do the lookup/creation of the IP.
It will first delete any old host entries for the specified host name, then do a lookup for a new IP, apply that IP to the VMware template, and finally register the host entry in DDI.
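Condensed, that DDI branch could look something like this (vm_name stands in for whatever the survey supplies):

  - name: let DDI handle the IP when requested
    block:
      - name: remove any stale host entry for this VM
        nios_host_record:
          name: "{{ vm_name }}.gregsowell.com"
          state: absent
          provider: "{{ nios_provider }}"

      - name: grab the next free IP for the VMware customization
        set_fact:
          vm_ip: "{{ lookup('nios_next_ip', subnet_range, provider=nios_provider)[0] }}"
    when: ip_address == "ddi"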

Conclusion

This brought a lot of pieces together for me (how automating server creation could tie all of the IP/DNS pieces together). I like the interface, its power, and its simplicity. Please leave me any questions or comments.

Thanks and happy automating.

Nov 22 / thebrotherswisp

The Brothers WISP 124 – Netonix SFP+, What Happened To Youtube, Incident Response

This week we have Greg, Nick A, Justin Miller, and Mike for a rather chill conversation.

**Sponsors**
Sonar.software
Cambium ePMP Bundle
Kwikbit.com
Towercoverage.com
**/Sponsors**

This week we talk about:
Valve Index VR Rig
UBNT 60GHz AirFiber
Netonix 14 port PoE with 2 SFP+ ports
Posed by an anonymous member in the Slack: why does MTK stick with 2.5 dBi antennas when everyone else puts 4-6 dBi on their AC kit?
Hitting VirusTotal’s API with automation
Downdetector wants ~$24k a year for access to their API…yeah, nope!
What happened with YouTube’s Nov 11th global outage…I can’t find the answer. The NANOG list had a post about “blocked by CORS policy.”
DDoS mitigation with FastNetMon.
wiki.js.org
My first outage since leaving the old job (fiber cut at my MDU).
Incident response

Here’s the video (if you don’t see it, hit refresh):

Nov 16 / Greg

Using ServiceNow As A CMDB In Ansible Automation Platform

Pulling in hosts from SNOW really isn’t too bad, and in fact, there’s an Ansible blog post on it here. I’m not reinventing the wheel here, but I figure it’s always nice to have another perspective on the process.

Demo Video

SNOW CMDBs

First things first: where can I find the SNOW CMDBs? I use a developer instance, which already has some material to work with, so your install may look different.
Here’s an article that shows how to quickly add entries to your CMDB, but also where to access everything.


In SNOW type in “ci class manager” in the search field. Once there, click on the “Hierarchy” button.


In here I browsed under Hardware, then Network Gear, and finally into IP Switch.


Once there, to view all of the devices (or to add/edit them), click “CI List”. These entries were added by me; I put in the manufacturer and IP addresses specifically. In the Tower section I use the manufacturer entry to assign the devices to a group as they are imported.


If I’m curious about what columns are available in the table, I’ll browse the attribute section under class info.

Playbooks

Relevant Github found here.

First, I’m using a collection in the dynamic inventory script. Collections are a new method Ansible is using to package files together. I can manually install the collection ahead of time, or I can specify it in a requirements file and it will be pulled at run time. The requirements.yml file is saved in a folder named collections. Here’s the contents of my requirements file:
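It only needs the one collection; a minimal version, assuming the servicenow.servicenow collection used by the Ansible blog post linked above:

---
collections:
  - name: servicenow.servicenow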

Now that I’ve got that out of the way, I’ll take a look at the dynamic inventory file. In this case I’ve named it snow-switch-now.yml. Note that the plugin has a somewhat arbitrary requirement that this file name end in “now.yml”.

Having a look at this file, the plugin line will always be the same (it’s just how we are pulling everything).
The table will change based on which specific CMDB table I want to pull. In this case I want all of the ip_switches. This name can be found above in the SNOW “ci class manager” section.

The fields section lists the columns that will be returned. Ultimately these will be added as hostvars on the inventory objects.

The keyed_groups section will add the hosts to groups based on returned CMDB information. In this case I’m grouping based on manufacturer, so all of my Cisco kit will end up in the cisco group.
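Rolled together, the inventory file presumably looks close to this (column names other than manufacturer are educated guesses):

plugin: servicenow.servicenow.now
table: cmdb_ci_ip_switch
fields: [ip_address, name, manufacturer]
keyed_groups:
  - key: sn_manufacturer | lower
    prefix: ''
    separator: ''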

Tower

First I need to set up a credential; I created a custom credential to pass in my SNOW instance, username, and password via environment variables:

fields:
  - id: supp_snow_instance
    type: string
    label: Instance Name
  - id: supp_snow_username
    type: string
    label: Username
  - id: supp_snow_password
    type: string
    label: Password
    secret: true
required:
  - supp_snow_instance
  - supp_snow_username
  - supp_snow_password
env:
  SN_INSTANCE: '{{ supp_snow_instance }}'
  SN_PASSWORD: '{{ supp_snow_password }}'
  SN_USERNAME: '{{ supp_snow_username }}'

Once that’s created, I add my custom credential and then create the inventory.

I’ll go ahead and add the repo I just created as a project; this will pull in all of my custom inventory yaml files.


Where it diverges from a standard inventory is that I click the sources section and add a custom source.


Once in the custom source I choose sourced from project, use the SNOW credential I just created, choose the project holding my custom import script, and choose the yaml file.

Now when I synchronize the inventory it pulls in my CMDB objects from SNOW:

Conclusion

While none of this is too terribly difficult, the first one can take a few minutes to sort out. Most of my problems were around formatting in the import yaml file, so be sure all of your column names are correct. I like the flexibility and power of this: it allows you to very granularly separate inventories based on tables.

Let me know how you see yourself using this, along with any questions and comments.

Thanks and happy automating!

Nov 13 / Greg

Access The Virustotal API Via Ansible

I’ve been doing more security-related automation lately (and finding it really interesting). Part of many folks’ process is to get an alert, then do some investigation/enrichment. One of the resources people use is a great site called virustotal.com. It allows you to put in a URL or upload a file, and it will check that against its database of services to see if anyone has reported malicious activity from it. While it is a pretty fast process, it is still a human one…how can this be made more efficient? Well, they DO have an API 🙂

They have a free API for non-commercial use that is rate-limited and has slightly fewer features than the paid version. I’ll be using the free version for my demo.

Demo Video

Ansible Playbook

Security repo here.

This playbook is at the heart of it all.
At the top I’m using a couple of important variables.
One is the api_key, which is retrieved from your virustotal account. Here I’m injecting mine at run time via a custom credential in Tower.
Another is test_url, which is the website you want to test against. The malware.wicar.org URL is a test one that will show results on a virustotal scan.

The first task uses the uri module. This is really my default module when interacting with various APIs. It’s almost too simple: I call the API, variable-replacing in the API key and the URL to be tested. Also notice that I save the returned output as the variable total_out.

  - name: hit virus total and save results to variable
    uri:
      url: "https://www.virustotal.com/vtapi/v2/url/report?apikey={{ api_key }}&resource={{ test_url }}"
      return_content: yes
    register: total_out

I then do some formatting by using a filter to take the dictionary output and format it to a list:

  - name: set new variable
    set_fact:
      total_scans: "{{ total_out.json.scans | dict2items }}"

Next I loop through the formatted output and count how many virus total malicious hits were made:

  - name: total_out counts
    when: item.value.detected
    set_fact:
      # default(0) ensures the counter starts at zero on the first match
      total_mal: "{{ total_mal | default(0) | int + 1 }}"
    loop: "{{ total_scans }}"

In the end I simply output how many engines matched. In production I would add this to a workflow: if there were 2 or more matches, I would flag the content for sure. Once flagged, any other operation can be performed; say, updating an incident ticket and sending notifications, or initiating a quarantine.
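Here’s one hedged way the final report and the two-or-more rule could be wired up:

  - name: report the detection count
    debug:
      msg: "{{ test_url }} flagged by {{ total_mal | default(0) }} of {{ total_scans | length }} engines"

  - name: mark the URL as malicious when two or more engines agree
    set_fact:
      url_malicious: true
    when: total_mal | default(0) | int >= 2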

What would you do differently or how can you see using this in your environment?
Thanks and happy automating!

Nov 9 / thebrotherswisp

The Brothers WISP 123 – Docker, Open Source Routing, Route Optimization, Role Transition



This week we have Greg, Nick A, Nick B, Tommy C, and Mike…a packed house! We did have some audio issues on this one, so expect a little noise; sorry folks!

**Sponsors**
Sonar.software
Cambium ePMP Bundle
Kwikbit.com
Towercoverage.com
**/Sponsors**

This week we talk about:
Automation for deploying CyberArk Conjur…was that ever a pain in the butt.
Downdetector has an enterprise version with an API…let me see if I can get access.
Border6 and other route optimizers.
Would you run all open source routing (FRR) sans Cumulus or VyOS? How risk averse are you…how much time do you have to dedicate? Have I just gotten too old?
What happens when a community on the web disappears: GeoCities.
Role transition and being OK with that.

Here’s the video (if you don’t see it, hit refresh):

Nov 4 / Greg

Deploying And Using CyberArk Conjur With Ansible Tower

First I have to say this wasn’t as simple and straightforward as I could have hoped…in fact it took me the better part of 1.5 days to get it working, and even then I had to get help from Jody Hunt from CyberArk (HUGE thanks to him). In fact, I partially followed his guide here to get started. I figured it took me enough work to stand it up, so why not go ahead and write a playbook to do all of it for me…and now I’ve made it available to you!

Video Demonstration

Environment Building

To get the infrastructure up and working I followed the Conjur Open Source guide here. This walks through most of the CLI commands required to get the containers up and working. If you want to do a manual deploy, you can pretty much just follow along with copy/paste.

The first step is to clone the conjur-quickstart repo. After that, docker-compose is used to pull the various containers, generate a data key, load the key in as an environment variable, and then start prepping the conjur environment.

Once this is complete, the structure of the system is loaded in via a policy. A policy dictates what elements exist: hosts, users, and variables (things that hold passwords and the like). Policies also determine which identities can access which things.

I created this simple flat policy that has two variables, one named password and one named ansible (I’m loading it in as a Jinja template, though I’m not changing anything at the moment):

Looking at the network.yml file that is created, you can see where I set up my variables at the top, then specify a host (it will be my tower server, so I named it tower), and the last section permits tower to read/execute the above variables. When I load this policy in, it will generate the API key that the tower user will use, so I pipe this to a file for easy reference later.
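A flat policy matching that description would look roughly like this (names taken from the text above):

- !variable password
- !variable ansible

- !host tower

- !permit
  role: !host tower
  privileges: [ read, execute ]
  resource: !variable password

- !permit
  role: !host tower
  privileges: [ read, execute ]
  resource: !variable ansible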

After I load that in, I store a secret in each of the variables:

docker-compose exec client conjur variable values add password redhat
docker-compose exec client conjur variable values add ansible redhat

Deployment From Ansible

Here is the playbook for full deployment:

I’ll break this down a little at a time.
In the vars section at the top I’m specifying which folder the cloned quickstart repo will go in. This is also where all of the key files will be stored.
Next I set what the passwords for my “password” and “ansible” variables will be (I set them to redhat by default).

One of the quirks of connecting conjur to tower is that it requires the host name “proxy” to be defined in tower’s hosts file (the name “proxy” is what tower uses to connect later on in the credential section). So the first task adds an entry to tower’s hosts file with the IP of the ca-conjur host.
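That first task can be as simple as a lineinfile entry (conjur_ip standing in for the ca-conjur host’s address):

  - name: make "proxy" resolve to the conjur server
    lineinfile:
      path: /etc/hosts
      line: "{{ conjur_ip }} proxy"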

I next create the storage directory, and then install some yum utilities and add the docker repo.

Next the yum module is used to install docker and git.

Since this is a lab server I kill the Linux firewall, then crank on Docker.

I now grab the docker-compose binary and set it to executable. Docker compose is what’s used henceforth to interact with the containers.
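Something along these lines handles both the download and the executable bit (the pinned version is an assumption from around the time of this post):

  - name: install docker-compose and mark it executable
    get_url:
      # ansible_system/ansible_architecture expand to e.g. Linux-x86_64
      url: "https://github.com/docker/compose/releases/download/1.27.4/docker-compose-{{ ansible_system }}-{{ ansible_architecture }}"
      dest: /usr/local/bin/docker-compose
      mode: '0755'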

After that a simple git clone is done on the quickstart repo.

Following that come all of the docker-compose commands. These commands will eventually create four files in the quickstart folder: data_key, admin_data, tower_data, and tower_pki_cert.
The data_key file will store the key used to connect to the myConjurAccount(the account I create everything in).
admin_data holds the key for the admin account.
tower_data holds the API key for the tower host that’s used to connect to conjur from tower.
tower_pki_cert contains the cert that is used to connect to conjur from tower.

At the end of the script it spits out almost all of the info required to create the custom credential (the only thing not displayed is the cert, because the formatting gets messed up).
This is an example of the final output:

TASK [print out user info] *****************************************************
ok: [ca-conjur] => {
    "msg": [
        {
            "Conjur URL": "https://proxy:8443"
        },
        "API Key:       \"api_key\": \"2kdm28c37ebx1akqtxzynheecjt9zk4ydwt0jb14hm91d2gtqpq2\"",
        {
            "Account": "myConjurAccount"
        },
        {
            "Username": "host/tower"
        },
        "Public Key Cert: in file at /opt/conjur-repo/tower_pki_cert.  I couldn't get formatting to look right for copy paste, so grab from file."
    ]
}

Conjur Credential In Tower

So, taking the final output from the script above, I’ll fill in the proper details on tower.


Here’s the screenshot of cat-ing the tower_pki_cert file and applying all of the info to my tower:

**Notice that in the host API key above, the trailing \" is escaping and isn’t part of the key**

With all of that info in place, click the test button and try the password or ansible variable:

Now to utilize this credential as a machine cred, create a standard machine cred, but click the magnifying glass on password:

Now I choose the Conjur credential I just created:

Once I click “Next” I can put in the variable path; in this case it would be password or ansible:

Troubleshooting

You can log into the conjur client from the CLI using the following:

docker-compose exec client conjur authn login -u admin -p fvdkz72tjb7pd3twtv731csqd1523bxezg7vvn72g3qf7r29zpwyt

You can find the password used above by catting the admin_data file and looking for the “API key for admin” line:

[root@ca-conjur conjur-repo]# cat admin_data
Created new account 'myConjurAccount'
Token-Signing Public Key: -----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyNnZc4T17gatYNfWwByu
OwpsgeI1Nc/NuUbcVS88R2e0VsI0EM6A8yN0/e0y9PxppbNnKRN3E7OEo7MWw6oI
WBWzDJtfexDXPVJLYfbJ0iwjXESRevaORyA58Lh9LOtp2ByUpQqEViGp0mxdDNJb
92ZdyCWhzAtMKhsqjQ38ZCIap0SCPSyZgZRTzh1cWeZaLpKGDXWy8cHEmeGHGeyQ
8yMtITUkzP2SFLuJQjahWlI/WlL8+Lm1yV5NZFv8WN0KhwCRB332RMVyJzaTwdqa
xMxlmYUbqVaGO4Lq2LatOyGGGmIzfKV6HS+0ynDm8w4p8yeoTvOu9Y4kq43jMcPo
+wIDAQAB
-----END PUBLIC KEY-----
API key for admin: 1wx2sj11a5wv821t62p133mw11xj1gm27011gjmtpp1dhcdqv2pk3617

Once logged in I can view the configured infrastructure with the conjur list command:

[root@ca-conjur conjur-repo]# docker-compose exec client conjur list
[
  "myConjurAccount:policy:root",
  "myConjurAccount:variable:password",
  "myConjurAccount:variable:ansible",
  "myConjurAccount:host:tower"
]

In the above, myConjurAccount is the system account. You can also see the variables in the system along with their location structure, as well as my configured hosts.

Now that I see my variables listed I can view their contents like so:

[root@ca-conjur conjur-repo]# docker-compose exec client conjur variable value password
redhat

If I want to check out what my docker containers are doing I can issue the following:

[root@ca-conjur conjur-repo]# docker ps
CONTAINER ID        IMAGE                            COMMAND                  CREATED             STATUS              PORTS                           NAMES
0fcd7aab279f        cyberark/conjur-cli:5            "sleep infinity"         19 hours ago        Up 19 hours                                         conjur_client
c212738f44da        nginx:1.13.6-alpine              "nginx -g 'daemon of…"   19 hours ago        Up 19 hours         80/tcp, 0.0.0.0:8443->443/tcp   nginx_proxy
686336d019ec        cyberark/conjur                  "conjurctl server"       19 hours ago        Up 19 hours         80/tcp                          conjur_server
096ace7521c8        cfmanteiga/alpine-bash-curl-jq   "tail -F anything"       19 hours ago        Up 19 hours                                         bot_app
7b41709e8a27        postgres:10.14                   "docker-entrypoint.s…"   19 hours ago        Up 19 hours         5432/tcp                        postgres_database

The above command will list the various containers and their current states.

I can view their log files by issuing a docker logs command along with the container ID from the ps command:

[root@ca-conjur conjur-repo]# docker logs c212738f44da
10.1.12.10 - - [03/Nov/2020:21:11:48 +0000] "POST /authn/myConjurAccount/host%2Ftower/authenticate HTTP/1.1" 200 632 "-" "python-requests/2.23.0"
10.1.12.10 - - [03/Nov/2020:21:11:48 +0000] "GET /secrets/myConjurAccount/variable/password HTTP/1.1" 200 16 "-" "python-requests/2.23.0"
10.1.12.10 - - [04/Nov/2020:14:46:04 +0000] "POST /authn/myConjurAccount/host%2Ftower/authenticate HTTP/1.1" 200 632 "-" "python-requests/2.23.0"
10.1.12.10 - - [04/Nov/2020:14:46:04 +0000] "GET /secrets/myConjurAccount/variable/password HTTP/1.1" 200 16 "-" "python-requests/2.23.0"

You can also follow the logs while you are testing with -f:

[root@ca-conjur conjur-repo]# docker logs c212738f44da -f
10.1.12.10 - - [03/Nov/2020:21:11:48 +0000] "POST /authn/myConjurAccount/host%2Ftower/authenticate HTTP/1.1" 200 632 "-" "python-requests/2.23.0"
10.1.12.10 - - [03/Nov/2020:21:11:48 +0000] "GET /secrets/myConjurAccount/variable/password HTTP/1.1" 200 16 "-" "python-requests/2.23.0"
10.1.12.10 - - [04/Nov/2020:14:46:04 +0000] "POST /authn/myConjurAccount/host%2Ftower/authenticate HTTP/1.1" 200 632 "-" "python-requests/2.23.0"
10.1.12.10 - - [04/Nov/2020:14:46:04 +0000] "GET /secrets/myConjurAccount/variable/password HTTP/1.1" 200 16 "-" "python-requests/2.23.0"

Conclusion

So, enterprise Conjur would be MUCH simpler to deal with, since it has a GUI that walks you through all of these steps. I’d honestly like to give the enterprise edition a test drive just to see the differences, but for now, the system is done.

Let me know your questions and comments, and as always, happy automating!

Oct 27 / Greg

CyberArk Vault Integration With Ansible Tower

CyberArk has some impressive security tools, and in today’s example I’m using their Vault product. We are connecting in through their AIM (Application Identity Manager) system. Like other secrets engines, AIM allows me to pull secure credentials from it at run time. Also, I’d like to give a big shoutout to the team at CyberArk for providing us with this excellent demo environment!

Demo Video

CyberArk

First I create an application…honestly I was lazy and just used testappid.

Next you add authentication certificates to use with the app.

After this I browse to policies and hit access control. From here I add any safes that I need access to.

I then select the safe and edit the members list.

Add the app that needs access. I just wanted it to retrieve passwords and not have any other access, so it only has retrieve.

After this I pop into the vault.

I open the safe in question. Here it’s “Test”.

I then create a password object. In my case it’s “ansible” and the password is redhat. **After this hit the Logoff button**

Tower Configuration

Tower has a custom credential lookup plugin to utilize CyberArk AIM, which I’ll use as a lookup for other credentials.

First I add a new credential of type “CyberArk AIM Central Credential Provider Lookup”

Next I put in my CyberArk AIM URL, application id (in this case testappid), the client key, and the client cert.

Now I create another standard credential of any type (in this instance I use a standard machine credential). For the password I click the magnifying glass, and it displays the CyberArk AIM credential I just created.

I now put in the object query based on the safe of Test and object of ansible.

safe=Test;object=ansible

Github Scripts

My CyberArk repo is here.

Here is a demo script using a custom credential to display the ansible password:
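A minimal equivalent of that script, assuming the custom credential injects the looked-up secret as an extra variable named demo_password (both names are hypothetical):

- hosts: localhost
  gather_facts: false
  tasks:
    - name: display the password retrieved through the AIM lookup
      debug:
        msg: "The ansible password is {{ demo_password }}"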

As you can see, all of the magic is done in the lookup plugin, so there’s nothing special to show here; that’s what’s so cool about the lookup (it’s so simple and clean).

Here’s another version updating the backup user on some Cisco switches:
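Again a hedged sketch, assuming a cisco inventory group and that the new password arrives as backup_password via a credential (both are placeholders):

- hosts: cisco
  gather_facts: false
  connection: ansible.netcommon.network_cli
  vars:
    ansible_network_os: cisco.ios.ios
  tasks:
    - name: update the backup user's password on the switch
      cisco.ios.ios_user:
        name: backup
        configured_password: "{{ backup_password }}"
        update_password: always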

Conclusion

It was a bear to figure all of this out, but once you have your head wrapped around it, it’s really quite simple. I really like how clean it all is.

If you have any questions or comments, please let me know.
Thanks and happy CyberArking 😉