I have a very complex nested JSON file, and I used the following script to see what the data looks like. From Python I write the dataframe into Alteryx:
from ayx import Package, Alteryx
import pandas as pd
import json
from flatten_json import flatten

def load_json_file(file_path):
    """
    Loads a JSON file and returns the data as a Python object.

    Args:
        file_path (str): The path to the JSON file.

    Returns:
        dict or list: The data loaded from the JSON file, or None if an error occurs.
    """
    try:
        with open(file_path, 'r') as file:
            data = json.load(file)
        return data
    except FileNotFoundError:
        print(f"Error: File not found at {file_path}")
        return None
    except json.JSONDecodeError:
        print(f"Error: Invalid JSON format in {file_path}")
        return None
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
        return None

# Example usage:
file_path = r'FileName1.json'
data = load_json_file(file_path)

# Flatten the nested JSON into a single dict, then build a one-row DataFrame from it.
flat_json = flatten(data)
df = pd.DataFrame([flat_json])
print(df)
Alteryx.write(df, 1)
This writes a single row with around 15,000 columns into Alteryx; every individual data value is treated as its own column.
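For reference, flatten() from flatten_json collapses every nested value into one key, with list indices baked into the key name, which is why each value ends up as its own column. A minimal illustration with made-up data:

from flatten_json import flatten

sample = {"id": 7, "groups": [{"name": "A", "size": 2}, {"name": "B", "size": 5}]}
print(flatten(sample))
# {'id': 7, 'groups_0_name': 'A', 'groups_0_size': 2, 'groups_1_name': 'B', 'groups_1_size': 5}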
I want rows and columns just like a SQL table. For example, there is a dict called groups: its items should become the rows, and their keys should become the column names.
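To illustrate the kind of result I'm after (the "groups" key and its fields below are made-up stand-ins for my real data), something like pandas' json_normalize with a record_path would give one row per list item and one column per key:

import pandas as pd

data = {
    "report": "daily",
    "groups": [
        {"name": "A", "size": 2},
        {"name": "B", "size": 5},
    ],
}

# One row per item in "groups", with its keys as column names;
# top-level fields can be carried along on each row via meta.
df = pd.json_normalize(data, record_path="groups", meta=["report"])
print(df)
#   name  size report
# 0    A     2  daily
# 1    B     5  daily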
Any guidance?