01-15-2026 09:17 AM
I'm working on an analysis automation and started by studying the following example: https://github.com/ni/systemlink-server-examples/blob/master/jupyter/analysis-automation/BasicExampl...
However, when I attempt to execute the following chunk:
data_links = ni_analysis_automation["data_links"]
file_ids = [d["fileId"] for d in data_links]
file_ids = file_ids[0:40]
file_infos = await metadata_api.get_multiple_file_info(tdmreader.FileList(file_ids))
I receive an error on the last line:
HTTP response body: {"error":{"name":"TDMReader.DataFinderError","code":-253520,"message":"DataFinder responded with error: 'con 'c6' is not valid for this user'.","args":["con 'c6' is not valid for this user"],"innerErrors":[]}}
This happens regardless of whether the Jupyter notebook is run from the hub or executed via an analysis trigger. The trigger is set up to run with Advanced Analysis privileges.
If I open the same file via DIAdem DFS, it works without a problem.
Just in case, here's the full code I have in my notebook when executed via trigger:
import os
import requests
import scrapbook as sb
import systemlink.clients.nitdmreader as tdmreader

# TDM Reader API clients
metadata_api = tdmreader.MetadataApi()
data_api = tdmreader.DataApi()

# Retrieve environment variables
systemlink_uri = os.getenv("SYSTEMLINK_HTTP_URI")
systemlink_api_key = os.getenv("SYSTEMLINK_API_KEY")
cert_path = os.getenv("SYSTEMLINK_HTTPS_CERT_PATH")

# File ids passed in by the analysis automation trigger
data_links = ni_analysis_automation["data_links"]
file_ids = [d["fileId"] for d in data_links]
print(file_ids)  # prints the expected dfs:/ paths for the triggered file

file_infos = await metadata_api.get_multiple_file_info(tdmreader.FileList(file_ids))
Anyone have suggestions?
01-16-2026 02:53 AM - edited 01-16-2026 03:21 AM
I was able to figure it out. Apparently the ni_analysis_automation variable that is populated when the automation routine executes the Jupyter notebook does not contain a valid session in its data links.
To resolve this, I extract the ids from the list of files in the passed-in variable, create a new ODS session, and reconstruct the file list:
import urllib.parse as ulp
import json

id_values = []
for link in ni_analysis_automation['data_links']:
    parsed = ulp.urlparse(link['fileId'])
    query_params = ulp.parse_qs(parsed.query)
    # The 'query' param is a JSON string, so we need to load it
    if 'query' in query_params:
        query_json = json.loads(query_params['query'][0])
        # Extract the id value
        id_value = query_json.get('test', {}).get('id', None)
        if id_value is not None:
            id_values.append(id_value)
payload = {
    "variables": {
        "$URL": {
            "stringArray": {"values": f"corbaname::#{datafinder_name}.ASAM-ODS"}
        }
    }
}
headers = {
    "x-ni-api-key": systemlink_api_key,
    "Content-Type": "application/x-asamods+json"
}
session_response = requests.post(
    f"{systemlink_uri}ni/asam/ods",
    json=payload,
    headers=headers,
    verify=cert_path
)
session = session_response.headers.get("Location")
file_ids = [f"dfs:/?url={session}&query={{\"test\":{{\"id\":{id_val}}}}}" for id_val in id_values]
# Call metadata service (for example)
file_infos = await metadata_api.get_multiple_file_info(tdmreader.FileList(file_ids))
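For what it's worth, the pure string-manipulation part of this workaround (pulling the ids out of the stale dfs:/ URLs, then splicing them onto a fresh session URL) can be isolated into two small helpers so it can be checked without hitting the server. The helper names and the example fileId below are mine, made up to match the dfs:/ format the notebook prints, and are not part of any SystemLink API:

```python
import json
import urllib.parse as ulp

def extract_ids(data_links):
    """Pull the numeric test ids out of the dfs:/ fileId URLs."""
    ids = []
    for link in data_links:
        parsed = ulp.urlparse(link["fileId"])
        query_params = ulp.parse_qs(parsed.query)
        # The 'query' param holds a JSON string like {"test":{"id":42}}
        if "query" in query_params:
            query_json = json.loads(query_params["query"][0])
            id_value = query_json.get("test", {}).get("id")
            if id_value is not None:
                ids.append(id_value)
    return ids

def rebuild_file_ids(ids, session):
    """Re-attach the ids to a freshly created ODS session URL."""
    return [f'dfs:/?url={session}&query={{"test":{{"id":{id_val}}}}}' for id_val in ids]

# Example with a made-up stale session URL:
links = [{"fileId": 'dfs:/?url=http://old-session&query={"test":{"id":42}}'}]
ids = extract_ids(links)  # [42]
new_ids = rebuild_file_ids(ids, "http://fresh-session")
```

This keeps the HTTP call that creates the session separate from the URL surgery, so only the requests.post step actually depends on the server.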
Is this really the best solution? This is all that I could come up with.