Parallel SQL query in BigQuery (mixing legacy and standard) running on a metadata schema (each row is a different table/column)
df_table contains metadata for a list of columns, with fields such as table_schema, table_name and column_name. For each column in column_name, I would like to calculate entropy (bits), Shannon entropy and the count of values.
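(To be explicit about the definitions the queries below implement: entropy (bits) is log2(number of distinct values), and Shannon entropy is -sum(p * log2(p)) over the relative frequencies p of the distinct values.)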
The following code works well in Python, but it is not parallel. I wonder if there is a more efficient way to run this:
from google.cloud import bigquery
import numpy as np

bqclient = bigquery.Client()

# RATIO_TO_REPORT requires legacy SQL; the UPDATE statement uses standard SQL.
job_config_True = bigquery.QueryJobConfig(use_legacy_sql=True)
job_config_False = bigquery.QueryJobConfig(use_legacy_sql=False)

for i, j in df_table[df_table['shannon_entropy'].isna()].iterrows():
    table_schema = j['table_schema']
    table_name = j['table_name']
    column_name = j['column_name']
    try:
        # Shannon entropy of the column's value distribution.
        q1 = f'''select -sum(p*log2(p)) as shannon_entropy from (
        select RATIO_TO_REPORT(c) over() p from (
        select {column_name}, count(*) c FROM {table_schema}.{table_name} group by 1))
        '''
        query_job = bqclient.query(q1, job_config=job_config_True)  # Make an API request.
        shannon_entropy = query_job.result().to_dataframe()['shannon_entropy'][0]
    except Exception:
        shannon_entropy = np.nan

    # Write entropy (bits), total count, distinct count and Shannon entropy back
    # to the metadata table. Note: standard SQL LOG takes the base as its second
    # argument, so log2 of the distinct count is LOG(COUNT(DISTINCT ...), 2).
    q = f'''UPDATE `myproject.info_tabels_all` t1
    set t1.entropy = t2.entropy, t1.values = t2.total, t1.unique = t2.distinct_total, t1.shannon_entropy = {shannon_entropy}
    from (
      SELECT
        LOG(COUNT(DISTINCT {column_name}), 2) as entropy,
        count({column_name}) as total,
        COUNT(DISTINCT {column_name}) as distinct_total
      FROM `datateam-248616.{table_schema}.{table_name}` ) t2
    where table_schema = '{table_schema}' and table_name = '{table_name}' and column_name = '{column_name}'
    '''
    print(table_name, shannon_entropy)
    # Note: result() is never called on this job, so UPDATE errors (e.g. when
    # shannon_entropy is nan) would go unnoticed.
    query_job = bqclient.query(q, job_config=job_config_False)  # Make an API request.
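One direction I am considering, shown below as an untested sketch: as far as I understand, bqclient.query() only submits the job and the blocking call is result(), so the read-only entropy queries could be fanned out over a thread pool and collected as they finish. compute_shannon and max_workers=8 are my own inventions here, and I am assuming the BigQuery client can be shared across threads:

from concurrent.futures import ThreadPoolExecutor

def compute_shannon(row):
    # Hypothetical helper: run the legacy-SQL entropy query for one
    # metadata row and return (row, shannon_entropy).
    q1 = f'''select -sum(p*log2(p)) as shannon_entropy from (
    select RATIO_TO_REPORT(c) over() p from (
    select {row['column_name']}, count(*) c
    FROM {row['table_schema']}.{row['table_name']} group by 1))
    '''
    try:
        job = bqclient.query(q1, job_config=job_config_True)
        return row, job.result().to_dataframe()['shannon_entropy'][0]
    except Exception:
        return row, np.nan

rows = [j for _, j in df_table[df_table['shannon_entropy'].isna()].iterrows()]
with ThreadPoolExecutor(max_workers=8) as pool:  # pool size is a guess
    for row, shannon_entropy in pool.map(compute_shannon, rows):
        print(row['table_name'], shannon_entropy)
        # ...then run the same UPDATE as above for this row. I left the
        # UPDATEs serial on purpose, since BigQuery limits concurrent
        # mutating DML statements against the same table anyway.

Even without threads, the same idea could work by submitting all the entropy jobs first and only then calling result() on each; the pool just keeps the bookkeeping simple. Would this be a reasonable approach, or is there a better pattern?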
I used the code from this question in the process: BigQuery: compute entropy of a column
python
sql
google-bigquery
entropy