  • 5.15 Technologies

How to collect data from the Tenable Nessus API

While every GUI has its merits, there often comes a time when you need to automate a process that relies on the data behind it. How do we do this programmatically? The good news is that most modern software platforms come equipped with an API. If the platform you are using does not have an API, I highly recommend finding a new platform.


This blog will show you how to access the scan reports from your Tenable Nessus vulnerability scanner via the API. We will focus on the basics:

  • Setup

  • Connect

  • Authenticate

  • Collect

  • Process

  • Report

My environment consists of the basic components required for this blog: a Python interpreter, an IDE (PyCharm), and an installation of Nessus Expert. You do not need Nessus Expert; it is simply what I use for penetration testing engagements. If you would like to follow along with the coding practice, the source code is located here.

1. From the browser interface connect and authenticate with your username and password.


2. Once you log in you will see your “SCANS” folders.


3. If you open one of the scans you will see a report button. Select the CSV radio button to see all the fields available for report downloads. We will use this to map out which fields we want to export and report on in CSV format.


4. Click Cancel. We do not want to run the report manually; we want to do this programmatically, which we will do in a minute or so. Click the “Settings” button, then “My Account”, then “API Keys”.


NOTE: We could also authenticate using “Basic Auth”, i.e., a username and password. Click “Generate”.
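The note above mentions “Basic Auth.” As a sketch of that alternative (not run against a live scanner here, and the helper names are mine), Nessus issues a session token from `POST /session`, which then rides in an `X-Cookie` header instead of `X-ApiKeys`:

```python
import requests
import urllib3

urllib3.disable_warnings()  # same self-signed certificate situation as before

def token_header(token):
    # Nessus expects the session token in an X-Cookie header
    return {'Content-type': 'application/json', 'X-Cookie': f'token={token}'}

def login(base_url, username, password):
    # POST /session returns {"token": "..."} on success per the API docs
    r = requests.post(f'{base_url}/session',
                      json={'username': username, 'password': password},
                      verify=False)
    r.raise_for_status()
    return token_header(r.json()['token'])
```

API keys remain the better choice for unattended automation, since they do not depend on an interactive session.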


5. You will be warned that you will overwrite any existing keys. If you already have keys, use those; if not, copy the keys presented for later use.

6. Your keys will look something like the example below. You will need both.

7. Feel free to explore the API documentation on your tenant to understand the syntax and formats expected in your HTTP requests. For now, we will press on to the task of automation.

8. Now that we have access, API keys, and some exposure to the documentation, let’s create a new Python File or Jupyter Notebook. Begin by adding Python libraries for import.


NOTE: You should comment out the line “import hidecreds”. That is my local vault library where I am storing my API keys. You may not need that for your implementation.

import requests  # Used for HTTP requests
import urllib3  # Used to disable SSL certificate warnings
#import hidecreds  # My real creds are hidden in this Python file. Not a best practice!
import json  # Used to process JSON data
import pandas as pd  # Used to simplify JSON data
from pandas import json_normalize  # Used to simplify JSON data
import pprint  # Used to pretty-print JSON data
import os  # Used to create or check for the existence of a folder
import time  # Obviously used for time/date functions 😊

9. Next, we will disable the warnings about “Self-Signed Certificates”.

# Disable SSL Warnings for my local install
urllib3.disable_warnings()

10. As we saw in the browser user interface, this is the payload we will use to determine what is downloaded and exported to CSV. You can reference the API documentation to learn more about what is and is not needed. The fields I want to use are set to "True".

payload = {
   "format": "csv",
   "reportContents": {
       "csvColumns": {
           "id": True,
           "cve": True,
           "cvss": True,
           "risk": True,
           "hostname": True,
           "protocol": True,
           "port": True,
           "plugin_name": False,
           "synopsis": False,
           "description": False,
           "solution": False,
           "see_also": False,
           "plugin_output": False,
           "stig_severity": False,
           "cvss3_base_score": False,
           "cvss_temporal_score": False,
           "cvss3_temporal_score": False,
           "risk_factor": False,
           "references": False,
           "plugin_information": False,
           "exploitable_with": False
        }
    },
   "extraFilters": {
       "host_ids": [],
       "plugin_ids": []
    }
}

11. The next code block contains our connection- and authentication-related data. This is where we need the API URL and the credentials we generated:

  • Access Key

  • Secret Key

  • API/URL

  • Headers for API Key Authentication (This can be the tricky part)

# Connection and Authentication Strings
my_nessus_api_url = "https://localhost:8834"
accessKey = 'YOUR ACCESS KEY HERE in the Single Quotes'
secretKey = 'YOUR SECRET KEY HERE in the Single Quotes'

headers = {'Content-type': 'application/json', 'X-ApiKeys': f'accessKey={accessKey}; secretKey={secretKey}'}

sleepPeriod = 5

12. Now that we have most of the information and variables in the script we can create a session, authenticate, get a status code, and see some data.

# Use this headers format if you have your credentials set up as listed above and not stored somewhere else!
# headers = {'Content-type': 'application/json', 'X-ApiKeys': f'accessKey={accessKey}; secretKey={secretKey}'}

# Create a session; this allows you to conduct multiple operations without authenticating repeatedly.
session = requests.session()
session.headers.update(headers)

# Connect and Authenticate
request = session.get(my_nessus_api_url + '/scans', verify=False)

# Print the Status Code, we are looking for a 200 OK
print(request.status_code)

# Print the data returned from the request
pprint.pprint(request.json())
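Before trusting the JSON, it can help to check the response defensively. A minimal sketch (the helper name `check_scan_response` is mine, not Tenable's):

```python
import requests

def check_scan_response(resp):
    # A 401 from /scans almost always means bad or regenerated API keys
    if resp.status_code == 401:
        raise SystemExit('Authentication failed - check your API keys')
    resp.raise_for_status()  # any other non-2xx raises an HTTPError
    return resp.json()
```

With the request above, `data = check_scan_response(request)` replaces the bare `request.json()`.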

13. That should have produced a “200 OK” status and a JSON response like the one below.

200
{'folders': [{'custom': 0,
              'default_tag': 0,
              'id': 2,
              'name': 'Trash',
              'type': 'trash',
              'unread_count': None},
             {'custom': 0,
              'default_tag': 1,
              'id': 3,
              'name': 'My Scans',
              'type': 'main',
              'unread_count': 0}],
 'scans': [{'control': True,
            'creation_date': 1672167657,
            'enabled': True,
            'folder_id': 3,
            'id': 5,
            'last_modification_date': 1672168008,
            'live_results': 1,
            'name': 'Base_Advanced_Scan',
            'owner': 'adidonato',
            'read': True,
            'rrules': 'FREQ=DAILY;INTERVAL=1',
            'shared': False,
            'starttime': '20221110T130000',
            'status': 'completed',

14. Next, we will get some data and put it into a better format than raw JSON. I prefer Pandas.

# Create a data frame from the JSON, so we can read it easier!
# We are basically decoupling JSON into a spreadsheet in memory!

folders = json.loads(request.text)['folders']
scans = json.loads(request.text)['scans']

folders_df = json_normalize(folders)

list_of_folders = folders_df.id.to_list()
for item in list_of_folders:
    print(item)

scans_df = json_normalize(scans)

scans_dictionary = pd.Series(scans_df.name.values, index=scans_df.id).to_dict()
for scan_id, name in scans_dictionary.items():
    print(f'{scan_id}:{name}')
   

15. Now we can look at the results in a “table,” like so. This helps me find exactly what I am looking for. To convert the JSON to a data frame we are using the pandas function “json_normalize”.
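As a standalone illustration of what `json_normalize` does (the record here is made up, shaped loosely like a scan entry), nested keys are flattened into dotted column names:

```python
from pandas import json_normalize

# Made-up record, loosely shaped like a Nessus scan entry
sample = [{'id': 5, 'name': 'Base_Advanced_Scan',
           'settings': {'owner': 'adidonato', 'shared': False}}]
df = json_normalize(sample)
print(sorted(df.columns))  # → ['id', 'name', 'settings.owner', 'settings.shared']
```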

16. Since we are downloading data, we need to create a folder structure, if none exists.

# Now we will create a function to download (Collect) the data for each scan and dump them to a CSV file.
# Let's start by creating a directory to store the data. If one does not exist, create it.
data_storage = 'data'

if not os.path.exists(data_storage):
    print('Folder does not exist, creating...')
    os.makedirs(data_storage)
else:
    print('Folder Exists!')

17. With all the basic pieces in place we can start to build our primary routine to connect, authenticate, request data, and download the data. The logic here is simple, but lengthy. We enumerate all the keys and values from the dictionary we created from the data frame, which holds the JSON results of our initial request. This dictionary has the “id” and “name” of each scan. We use that information to query the API and get the “Filename” and “Token.” With that information we request the report to be generated. Then we wait for it to complete. When it is “ready”, we download it in CSV format, based on the “Payload” we created earlier in the script. This will dump all the reports in your “My Scans” folder to a CSV file in the “data” directory.

# Using the existing session, from our list of scans (id), we will download each scan to a CSV.
for scan_id, name in scans_dictionary.items():
    scan_url = f'{my_nessus_api_url}/scans/{scan_id}/export'
    # Request an export for this scan; the response contains the file id and token
    jsonPayload = json.dumps(payload)
    r = requests.post(url=scan_url, headers=headers, data=jsonPayload, verify=False)
    jsonData = r.json()
    scanFile = str(jsonData['file'])
    scanToken = str(jsonData['token'])
    status = "loading"
# Use the file just received and check to see if it's 'ready', otherwise sleep for sleepPeriod seconds and try again
    while status != 'ready':
        URL = my_nessus_api_url + "/scans/" + str(scan_id) + "/export/" + scanFile + "/status"
        t = requests.get(url=URL, headers=headers, verify=False)
        data = t.json()
        if data['status'] == 'ready':
            status = 'ready'
        else:
            time.sleep(sleepPeriod)
# Now that the report is ready, download
    URL = my_nessus_api_url + "/scans/" + str(scan_id) + "/export/" + scanFile + "/download"
    d = requests.get(url=URL, headers=headers, verify=False)
    dataBack = d.text
# Clean up the CSV data
    csvData = dataBack.split('\r\n', -1)
    NAMECLEAN = name.replace('/', '-', -1)
    print("-----------------------------------------------")
    print("Starting Download " + NAMECLEAN)
    output_file = f'{data_storage}/{NAMECLEAN}.csv'
    with open(output_file,'w') as csvfile:
        for line in csvData:
            csvfile.writelines(line + '\n')
print('===================================')
print("All Tasks Completed!")
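With the CSV files on disk, the “Process” and “Report” steps from our list can be sketched with pandas. This example writes two tiny stand-in files to a temporary directory so it is self-contained; point `data_storage` at your real “data” folder instead. The `Risk` column corresponds to the `"risk": True` field we enabled in the export payload:

```python
import glob
import os
import tempfile
import pandas as pd

# Stand-in for the 'data' directory the download loop populated
data_storage = tempfile.mkdtemp()
pd.DataFrame({'Risk': ['High', 'Low'], 'Host': ['10.0.0.1', '10.0.0.1']}) \
    .to_csv(os.path.join(data_storage, 'scan_a.csv'), index=False)
pd.DataFrame({'Risk': ['High'], 'Host': ['10.0.0.2']}) \
    .to_csv(os.path.join(data_storage, 'scan_b.csv'), index=False)

# Merge every per-scan CSV into one frame and summarize findings by Risk
frames = [pd.read_csv(f) for f in sorted(glob.glob(os.path.join(data_storage, '*.csv')))]
combined = pd.concat(frames, ignore_index=True)
print(combined['Risk'].value_counts().to_dict())  # → {'High': 2, 'Low': 1}
```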

There is much more you can do with the API; I hope that this helps you get started on your journey.


Thank you for taking the time to review this article and feel free to contact us if your project needs more advanced capabilities.

