Tenable Core (Tenable.sc/Nessus/NNM) on Nutanix AHV

Saptarshi Biswas
9 min read · Jun 29, 2020


The following provides installation instructions for deploying Tenable Core (Security Center, Nessus and Nessus Network Monitor) on Nutanix AHV.

Obtaining the Image for Nutanix AHV

Tenable Core products (Tenable.sc, Nessus and NNM) are available as ISO, OVA and Hyper-V form factors. These images can be downloaded from https://www.tenable.com/downloads

Customers are required to log in prior to accessing the images. After a successful login, downloads can be viewed under 'Tenable Core' as shown below:

Click 'View Downloads' to view 'Tenable Core + Nessus', 'Tenable Core + Nessus Network Monitor' or 'Tenable Core + Tenable.sc' as shown below:

Download the OVA image of the Tenable Core product you wish to deploy on Nutanix AHV from the list above.

Install Tenable Core (Tenable.sc, NNM and Nessus) on Nutanix AHV

This section provides the steps to deploy the OVA image of any Tenable Core product on Nutanix AHV (Acropolis hypervisor). For more details on Nutanix architecture, functionality and features, refer to the Nutanix Bible.

Using Tenable Core OVA image

Before you begin:
Confirm your environment meets the requirements described in the Tenable documentation for the product you are deploying.

Confirm your environment also meets the internet and port requirements described for each of the above products; these are documented at docs.tenable.com.
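As a quick sanity check of outbound connectivity, you can test HTTPS reachability to Tenable's plugin feed from the network segment the appliance will use. This is only a sketch; substitute the exact hosts and ports listed in the port requirements for your product:

curl -sI https://plugins.nessus.org | head -1    # expect an HTTP status line if outbound 443 is open
nc -zv plugins.nessus.org 443                    # alternative check, if netcat is installed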

To install Tenable Core + Nessus/Tenable.sc/NNM on Nutanix AHV:

  • Unzip the Tenable Core product OVA using any extraction tool (e.g., 7-Zip). The extracted archive contains a VMDK image file which will be used for creating the virtual appliance on AHV.
  • Upload the extracted VMDK to the Prism Image service (PE or PC); see the command-line sketch after this list.
  • Create a VM from Prism with the vCPU and RAM mentioned in the requirements section above.
  • Add a vNIC and power on the VM. Launch the console and proceed with the configuration below.
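If you prefer the command line, the extraction and image upload can be done as below. This is a minimal sketch: the OVA filename, NFS path and storage container name are examples, and the acli command is run from a CVM.

tar -xvf TenableCore-Nessus.ova    # an OVA is a tar archive containing the OVF descriptor and the VMDK
acli image.create TenableCore-Nessus-disk \
  source_url=nfs://<file-server>/images/TenableCore-Nessus-disk1.vmdk \
  container=default image_type=kDiskImage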

Note: The base image of Tenable Core products is CentOS 7.

  • Boot the VM in rescue mode.
  • Log in using the default credentials:
    Username: wizard
    Password: admin
  • Assign a static IP, or skip this step if using DHCP.
  • Create an administrator account and set the password.
  • Log in using the newly configured administrator account and follow the steps below:

uname -r (get the kernel version)

sudo cp /boot/initramfs-<kernel_version>.img /boot/initramfs-<kernel_version>.img.bak (create a backup of the current initramfs)

sudo dracut -f /boot/initramfs-<kernel_version>.img <kernel_version> (overwrite the current initramfs with a new one; there is no file named after the trailing <kernel_version>, you simply type the version number reported by uname -r)
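The same three steps can be scripted so the kernel version only has to be typed once; a minimal sketch:

KVER=$(uname -r)                                                  # kernel version, e.g. 3.10.0-1127.el7.x86_64
sudo cp /boot/initramfs-$KVER.img /boot/initramfs-$KVER.img.bak   # back up the current initramfs
sudo dracut -f /boot/initramfs-$KVER.img $KVER                    # rebuild the initramfs for that kernel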

  • Reboot the VM and choose the boot option as shown

Getting Started

  • Log in to the web UI at https://<tenable-ip>:8000 using the new administrator account.
  • Verify that the installation is successful: the Hardware field should show 'Nutanix AHV', all services should be up and running, and there should be no deployment or boot-up errors in the logs.
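The last point can be double-checked from the console or an SSH session; since the base image is CentOS 7, the usual systemd tools are available:

journalctl -p err -b    # errors logged since the current boot
systemctl --failed      # any services that failed to start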

What Next: Nessus Installation

Note: For this test we set up all three VMs before getting into the configuration part. All three Tenable Core OVAs worked using the above approach. The configuration below assumes Tenable.sc is up and running and that licenses are applied for Nessus and NNM in addition to the SC license. For Nessus configuration with SC we followed the steps below:

Go to 'Nessus' in the left navigation of the Tenable Core UI main menu to display the 'Nessus Installation Info'.

Open the URL https://<tenable-ip>:8834 and select ‘Managed Scanner’.

After the Nessus setup completes, verify the service is running as shown below:
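The same check can be made from the command line; a quick sketch, assuming the standard nessusd service name used by Nessus on Linux:

systemctl status nessusd    # should report active (running)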

Disk Management (for Nessus)

Tenable Core uses Linux LVM for disk management. Tenable Core installs or deploys with the following preconfigured partitions:

/boot
swap
/
/var/log
/opt

Add or Expand Disk Space

To increase the disk space on a Tenable Core VM deployed on Nutanix AHV, administrators can choose to add a new disk or expand an existing disk.

Steps to increase disk space:

  • Attach the new vDisk to the VM from Prism as shown below:
  • Power on the Tenable virtual machine and log in to the Tenable Core UI.
  • Click 'Storage' in the left navigation menu; the new vDisk is displayed under the 'Drives' section:
  • In the Filesystems section, locate the file system with /opt as the Mount Point and note the file system Name (e.g., /dev/vg0/00).

NOTE: Typically, you want to add space to /opt. To add more storage space to a less common partition (for example, / or /var/log), locate the file system for that partition.

  • In the above example, /var/log is allocated 4.99 GiB and we would like to increase it. Select the partition and navigate to the volume group page.
  • In the 'Physical Volumes' section, click the + button to 'Add Disks'.
  • Select the new vDisk and click Add.
  • The physical volume list is now updated to show the newly added vDisk.
  • In the Logical Volumes section, expand the entry for the file system Name that includes your preferred partition as the Mount Point; in our example, /var/log.
  • Click Grow. The Grow Logical Volume window appears.
  • Use the slider to increase the size of the file system to your desired size (typically, to the new maximum), then click Grow.
  • The system expands the logical volume and the file system.
  • Verify that the volume group size has also increased.
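The result can also be verified (or the whole expansion performed) from the shell. A rough CLI sketch of the same operation; the device name /dev/sdb, the volume group vg0, the logical volume placeholder and the +20G size are examples and should be checked against your own lsblk/pvs output:

lsblk                                    # identify the new vDisk, e.g. /dev/sdb
sudo pvcreate /dev/sdb                   # initialise it as an LVM physical volume
sudo vgextend vg0 /dev/sdb               # add it to the existing volume group
sudo lvextend -r -L +20G /dev/vg0/<lv_for_var_log>    # grow the LV and its filesystem (-r)
sudo vgs && df -h /var/log               # confirm the extra space is visible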

Nessus Network Monitor on AHV

For deploying NNM on Nutanix AHV, we recommend two separate vNICs: one for the management IP and the other for capturing/monitoring network traffic. The first interface (eth0) can be configured from the Nutanix Prism UI as a normal vNIC. However, eth1 (used to analyse traffic) needs a 'Network Function vNIC' and a Service Chain in vTap mode on the Nutanix side.

More info on Service Chains and the Tap packet processor: https://nutanixbible.com/#anchor-book-of-ahv-service-chaining

NOTE: Service Chain configuration is not available in the Prism UI and needs out-of-band Prism Central API calls to configure it.

The API call workflow is explained in Nutanix KB 5486: https://portal.nutanix.com/#/page/kbs/details?targetId=kA00e000000LIelCAG

The service chain and the vNIC in vTap mode are configured manually by a Nutanix tech or SE, and the whole workflow can be automated using a Nutanix Calm blueprint. The finalised blueprint could also be published on the Calm marketplace for any customer to use. But for this we would require an NNM qcow2 image that is compatible with AHV.

For this test and validation we manually created an eth1 interface in vTap mode for each of the two NNM VMs and associated it with a Service Chain.
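For reference, a tap-mode network function vNIC can be created from a CVM with acli. This is only a sketch (the VM name is an example, and the exact parameters should be verified against KB 5486 for your AOS version); the service chain association itself still follows the API workflow above:

acli vm.nic_create NNM-01 type=kNetworkFunctionNic network_function_nic_type=kTap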

Prism Central UI will show the service chain and the number of associated VMs like the one below:

Each NNM has to be deployed on a separate AHV node and pinned to that host (as shown below). If the AHV node/host goes down, the NNM deployed on it does not migrate with the other user VMs to another AHV node; instead, the NNM on the destination AHV node monitors the traffic of the migrated user VMs.
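Host pinning can be set from Prism, or from a CVM with acli; a short sketch, with the VM name and host address as examples:

acli vm.affinity_set NNM-01 host_list=<ahv-host-1>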

The network configuration of NNM will be displayed on Prism UI as shown below:

The first one is the management vNIC assigned to a VLAN, in this case NR_PRT_STATIC, and the other is the 'Network Function vNIC'. The same is visible in the acli configuration of the VM:

nic_list {
  connected: True
  mac_addr: "50:6b:8d:4e:21:9e"
  network_name: "NR_PRT_STATIC"
  network_type: "kNativeNetwork"
  network_uuid: "169fcae6-751e-443d-b066-c26c0cada0a1"
  type: "kNormalNic"
  uuid: "82afb2eb-52ea-48f1-98b4-8ba8a6110c14"
  vlan_mode: "kAccess"
}

nic_list {
  mac_addr: "50:6b:8d:cd:01:f7"
  network_function_nic_type: "kTap"
  network_type: "kNativeNetwork"
  type: "kNetworkFunctionNic"
  uuid: "0fd4e966-9565-4d0c-9593-76b652251736"
}
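The output above can be retrieved from any CVM; the VM name is an example:

acli vm.get NNM-01    # the nic_list entries appear in the output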

With all the configuration in place, we are ready to power on the NNM VM and proceed with assigning the capture interface in NNM.

After boot-up and login, the license shows up as not activated since the appliance is not yet connected to Tenable.sc.

Next, launch the NNM UI at https://<nnm-ip>:8835, log in with the default username and password (both 'admin') and change the default password.

Since this NNM VM will be managed by Tenable.sc, configure it accordingly.

Next, choose the monitoring interface that will be used to capture traffic for analysis. In this case it is the 'Network Function vNIC' that we configured previously; also add the subnets to be monitored and click Finish.

By default, NNM will be operating in Asset discovery mode.

Navigate to https://<nnm-ip>:8000 → Networking → Interfaces → eth1

Change IPv4 settings to ‘Link Local’ and apply as shown below:

It should now display traffic being received by eth1.
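If required, the mirrored traffic can also be confirmed from the shell; a quick sketch, assuming tcpdump is available on the appliance:

sudo tcpdump -i eth1 -c 20 -nn    # should show packets from the monitored VMs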

Uncheck 'Run in Discovery Mode' in the NNM settings, and the dashboard should be populated with traffic analysis data and the vulnerabilities detected.

Tenable.sc Configuration: Add NNM

The newly provisioned NNM needs to be added to the Tenable.sc instance.

Navigate to https://<tenable.sc-ip>:443 → Resources → Nessus Network Monitors → Click Add

Fill in the NNM node details (using the IP of the NNM node's management interface), assign a repository and click Submit.

Verify the status of the NNM nodes and Nessus nodes on Tenable.sc. All three virtual appliances are now running on Nutanix AHV.
