Hi Guys!
Here is the post on how to call the Dnpr library & what its current limitations are.
Before we start, let's look at the calling script, and then we'll explain how to use it –
##############################################
#### Written By: SATYAKI DE               ####
#### Written On: 08-Sep-2019              ####
####                                      ####
#### Objective: Main calling scripts.     ####
##############################################

from dnpr.clsDnpr import clsDnpr
import datetime as dt
import json

# Disabling Warning
def warn(*args, **kwargs):
    pass

import warnings
warnings.warn = warn

def main():
    try:
        srcJson = [
            {"FirstName": "Satyaki", "LastName": "De", "Sal": 1000},
            {"FirstName": "Satyaki", "LastName": "De", "Sal": 1000},
            {"FirstName": "Archi", "LastName": "Bose", "Sal": 500},
            {"FirstName": "Archi", "LastName": "Bose", "Sal": 7000},
            {"FirstName": "Deb", "LastName": "Sen", "Sal": 9500}
        ]

        print("=" * 157)
        print("Checking distinct function!")
        print("=" * 157)
        print()
        print("*" * 157)
        print("Input Data: ")
        srcJsonFormat = json.dumps(srcJson, indent=1)
        print(str(srcJsonFormat))
        print("*" * 157)

        # Initializing the class
        t = clsDnpr()

        print("1. Checking distinct functionality!")

        var1 = dt.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
        print("Start Time: ", str(var1))

        # Invoking the distinct function
        tarJson = t.distinct(srcJson)

        print("*" * 157)
        print("Output Data: ")
        tarJsonFormat = json.dumps(tarJson, indent=1)
        print(str(tarJsonFormat))
        print("*" * 157)

        if not tarJson:
            print()
            print("No relevant output data!")
            print("*" * 157)
        else:
            print()
            print("Relevant output data comes!")
            print("*" * 157)

        var2 = dt.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
        print("End Time: ", str(var2))
        print("=" * 157)
        print("End of distinct function!")
        print("=" * 157)

        print("2. Checking nvl functionality!")

        srcJson_1 = [
            {"FirstName": "Satyaki", "LastName": "", "Sal": 1000},
            {"FirstName": "Archi", "LastName": "Bose", "Sal": 500},
            {"FirstName": "Deb", "LastName": "", "Sal": 9500}
        ]

        var3 = dt.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
        print("Start Time: ", str(var3))

        strDef = 'FNU'
        print("Default Value: ", strDef)

        srcColName = 'LastName'
        print('Candidate Column for NVL: ', srcColName)

        # Invoking the nvl function
        tarJson_1 = t.nvl(srcJson_1, srcColName, strDef)

        print("*" * 157)
        print("Output Data: ")
        tarJsonFormat_1 = json.dumps(tarJson_1, indent=1)
        print(str(tarJsonFormat_1))
        print("*" * 157)

        if not tarJson_1:
            print()
            print("No relevant output data!")
            print("*" * 157)
        else:
            print()
            print("Relevant output data comes!")
            print("*" * 157)

        var4 = dt.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
        print("End Time: ", str(var4))
        print("=" * 157)
        print("End of nvl function!")
        print("=" * 157)

        print("3. Checking partition-by functionality!")

        srcJson_2 = [
            {"FirstName": "Satyaki", "LastName": "", "Sal": 1000},
            {"FirstName": "Satyaki", "LastName": "", "Sal": 700},
            {"FirstName": "Archi", "LastName": "Bose", "Sal": 500},
            {"FirstName": "Deb", "LastName": "", "Sal": 9500},
            {"FirstName": "Archi", "LastName": "Bose", "Sal": 4500},
        ]

        var5 = dt.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
        print("Start Time: ", str(var5))

        GrList = ['FirstName', 'LastName']
        print("Partition By Columns::: ", str(GrList))

        grOperation = 'Max'
        print('Operation to be performed: ', grOperation)

        strCandidateColumnName = 'Sal'
        print('Column Name on which the aggregate function will take place: ', strCandidateColumnName)

        # Invoking the partition by function - MAX
        tarJson_1 = t.partitionBy(srcJson_2, GrList, grOperation, strCandidateColumnName)

        print("*" * 157)
        print("Output Data: ")
        tarJsonFormat_1 = json.dumps(tarJson_1, indent=1)
        print(str(tarJsonFormat_1))
        print("*" * 157)

        if not tarJson_1:
            print()
            print("No relevant output data!")
            print("*" * 157)
        else:
            print()
            print("Relevant output data comes!")
            print("*" * 157)

        var6 = dt.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
        print("End Time: ", str(var6))

        var7 = dt.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
        print("Start Time: ", str(var7))

        grOperation_1 = 'Min'
        print('Operation to be performed: ', grOperation_1)

        # Invoking the Partition By function - MIN
        tarJson_2 = t.partitionBy(srcJson_2, GrList, grOperation_1, strCandidateColumnName)

        print("*" * 157)
        print("Output Data: ")
        tarJsonFormat_2 = json.dumps(tarJson_2, indent=1)
        print(str(tarJsonFormat_2))
        print("*" * 157)

        if not tarJson_2:
            print()
            print("No relevant output data!")
            print("*" * 157)
        else:
            print()
            print("Relevant output data comes!")
            print("*" * 157)

        var8 = dt.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
        print("End Time: ", str(var8))

        var9 = dt.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
        print("Start Time: ", str(var9))

        grOperation_2 = 'Avg'
        print('Operation to be performed: ', grOperation_2)

        # Invoking the Partition By function - Avg
        tarJson_3 = t.partitionBy(srcJson_2, GrList, grOperation_2, strCandidateColumnName)

        print("*" * 157)
        print("Output Data: ")
        tarJsonFormat_3 = json.dumps(tarJson_3, indent=1)
        print(str(tarJsonFormat_3))
        print("*" * 157)

        if not tarJson_3:
            print()
            print("No relevant output data!")
            print("*" * 157)
        else:
            print()
            print("Relevant output data comes!")
            print("*" * 157)

        var10 = dt.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
        print("End Time: ", str(var10))

        var11 = dt.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
        print("Start Time: ", str(var11))

        grOperation_3 = 'Sum'
        print('Operation to be performed: ', grOperation_3)

        # Invoking the Partition By function - Sum
        tarJson_4 = t.partitionBy(srcJson_2, GrList, grOperation_3, strCandidateColumnName)

        print("*" * 157)
        print("Output Data: ")
        tarJsonFormat_4 = json.dumps(tarJson_4, indent=1)
        print(str(tarJsonFormat_4))
        print("*" * 157)

        if not tarJson_4:
            print()
            print("No relevant output data!")
            print("*" * 157)
        else:
            print()
            print("Relevant output data comes!")
            print("*" * 157)

        var12 = dt.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
        print("End Time: ", str(var12))
        print("=" * 157)
        print("End of partition function!")
        print("=" * 157)

        print("4. Checking regular expression functionality!")
        print()

        var13 = dt.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
        print("Start Time: ", str(var13))

        print('::Function Regex_Like:: ')
        print()

        tarColumn = 'FirstName'
        print('Target Column for Regex_Like: ', tarColumn)

        inpPattern = r"\bSa"
        print('Input Pattern: ', str(inpPattern))

        # Invoking the regex_like function
        tarJson = t.regex_like(srcJson, tarColumn, inpPattern)

        print('End of Function Regex_Like!')
        print()

        print("*" * 157)
        print("Output Data: ")
        tarJsonFormat = json.dumps(tarJson, indent=1)
        print(str(tarJsonFormat))
        print("*" * 157)

        if not tarJson:
            print()
            print("No relevant output data!")
            print("*" * 157)
        else:
            print()
            print("Relevant output data comes!")
            print("*" * 157)

        var14 = dt.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
        print("End Time: ", str(var14))

        var15 = dt.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
        print("Start Time: ", str(var15))

        print('::Function Regex_Replace:: ')
        print()

        tarColumn = 'FirstName'
        print('Target Column for Regex_Replace: ', tarColumn)

        inpPattern = r"\bSa"
        print('Input Pattern: ', str(inpPattern))

        replaceString = 'Ka'
        print('Replacing Character: ', replaceString)

        # Invoking the regex_replace function
        tarJson = t.regex_replace(srcJson, tarColumn, inpPattern, replaceString)

        print('End of Function Regex_Replace!')
        print()

        print("*" * 157)
        print("Output Data: ")
        tarJsonFormat = json.dumps(tarJson, indent=1)
        print(str(tarJsonFormat))
        print("*" * 157)

        if not tarJson:
            print()
            print("No relevant output data!")
            print("*" * 157)
        else:
            print()
            print("Relevant output data comes!")
            print("*" * 157)

        var16 = dt.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
        print("End Time: ", str(var16))

        var17 = dt.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
        print("Start Time: ", str(var17))

        print('::Function Regex_Substr:: ')
        print()

        tarColumn = 'FirstName'
        print('Target Column for Regex_Substr: ', tarColumn)

        inpPattern = r"\bSa"
        print('Input Pattern: ', str(inpPattern))

        # Invoking the regex_substr function
        tarJson = t.regex_substr(srcJson, tarColumn, inpPattern)

        print('End of Function Regex_Substr!')
        print()

        print("*" * 157)
        print("Output Data: ")
        tarJsonFormat = json.dumps(tarJson, indent=1)
        print(str(tarJsonFormat))
        print("*" * 157)

        if not tarJson:
            print()
            print("No relevant output data!")
            print("*" * 157)
        else:
            print()
            print("Relevant output data comes!")
            print("*" * 157)

        var18 = dt.datetime.now().strftime("%Y-%m-%d %H-%M-%S")
        print("End Time: ", str(var18))
        print("=" * 157)
        print("End of regular expression function!")
        print("=" * 157)

    except ValueError:
        print("No relevant data to proceed!")

    except Exception as e:
        print("Top level Error: args:{0}, message:{1}".format(e.args, str(e)))

if __name__ == "__main__":
    main()
Let’s explain the key lines –
As of now, the library mostly supports simple JSON as its source payload.
As you can see, we've started with a relatively simple JSON containing an array of elements.
# Initializing the class
t = clsDnpr()
In this line, you initialize the main library.
Let’s explore the different functions, which you can use on JSON.
1. Distinct:
Let's discuss the distinct function on JSON. This function can be extremely useful if you use a NoSQL database that doesn't offer a distinct feature, or if you are dealing with (or expecting) duplicate JSON inputs from your source.
Let’s check our sample payload for distinct –

Here is the basic syntax & argument that it is expecting –
distinct( Input Json ) return Output Json
So, all you have to ensure is that you are passing a valid JSON input.
As per our example –
# Invoking the distinct function
tarJson = t.distinct(srcJson)
And, here is the output –

If you compare the source JSON, you will notice that the two identical entries with the name "Satyaki" are now replaced by one unique entry.
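For the curious, a distinct over a list of flat dictionaries can be sketched in a few lines of plain Python. This is only an illustration of the idea, not the library's actual implementation –

```python
import json

def distinct_json(src):
    # De-duplicate a list of flat dictionaries, preserving the original order.
    # json.dumps with sort_keys=True turns each row into a hashable,
    # key-order-independent signature.
    seen = set()
    out = []
    for row in src:
        sig = json.dumps(row, sort_keys=True)
        if sig not in seen:
            seen.add(sig)
            out.append(row)
    return out

srcJson = [
    {"FirstName": "Satyaki", "LastName": "De", "Sal": 1000},
    {"FirstName": "Satyaki", "LastName": "De", "Sal": 1000},
    {"FirstName": "Archi", "LastName": "Bose", "Sal": 500},
    {"FirstName": "Archi", "LastName": "Bose", "Sal": 7000},
    {"FirstName": "Deb", "LastName": "Sen", "Sal": 9500}
]

# The two identical "Satyaki" rows collapse into one.
tarJson = distinct_json(srcJson)
```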
Limitation: Currently, this supports only basic JSON. However, I'm working on supporting much more complex, hierarchical JSON in the coming days.
2. NVL:
NVL is another feature that I think a format like JSON should have. So, this library specifically handles the NULL data scenario, where the developer may want to substitute a default value in place of NULL.
Hence, the implementation of this function.
Here is the sample payload for this –

In this case, suppose some business logic requires null values of LastName to be replaced with a default value, e.g. FNU. This function helps you implement that logic.
Here is the basic syntax & argument that it is expecting –
nvl( Input Json, Prospective Null Column Name, Default Value in case of Null ) return Output Json
And, here is the code implementation –
strDef = 'FNU'
print("Default Value: ", strDef)
srcColName = 'LastName'
print('Candidate Column for NVL: ', srcColName)
# Invoking the nvl function
tarJson_1 = t.nvl(srcJson_1, srcColName, strDef)
So, in the above lines, the code will replace the Null values in the LastName column with "FNU".
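If you are wondering what an nvl like this boils down to, here is a minimal plain-Python sketch (an illustration only, not the library's code), which treats both None and the empty string as "null" –

```python
def nvl_json(src, col, default):
    # Replace None or empty-string values of the given column with a default,
    # without mutating the input rows.
    out = []
    for row in src:
        new_row = dict(row)
        if new_row.get(col) in (None, ""):
            new_row[col] = default
        out.append(new_row)
    return out

srcJson_1 = [
    {"FirstName": "Satyaki", "LastName": "", "Sal": 1000},
    {"FirstName": "Archi", "LastName": "Bose", "Sal": 500},
    {"FirstName": "Deb", "LastName": "", "Sal": 9500}
]

# Empty LastName values become 'FNU'; populated ones are untouched.
tarJson_1 = nvl_json(srcJson_1, 'LastName', 'FNU')
```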
And, Here is the output –

3. Partition_By:
I personally like this function, as it gives you more power to manipulate JSON data with aggregates such as Avg, Min, Max or Sum. This can be very useful for implementing basic aggregation on the fly.
Here is the basic syntax & argument that it is expecting –
partition_by( Input Json, Group By Column List, Group By Operation, Candidate Column Name, Output Column Name ) return Output Json
Now, let's explore the sample payload we'll use to test all these functions –

Case 1:
In this case, we’ll calculate the maximum salary against FirstName & LastName. However, I want to print the Location in my final JSON output.
So, let's look at the sample data in tabular form for better understanding –

So, as per our business logic, our MAX aggregate operates only on FirstName & LastName. Hence, the calculation will proceed accordingly.
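To make the Max semantics concrete, here is an illustrative plain-Python sketch of a partition-by-max (not the library's implementation), run against the simplified payload from the calling script above, which omits the Location column used in the table –

```python
def partition_by_max(src, group_cols, cand_col):
    # Keep, for each distinct combination of group_cols values,
    # the row holding the maximum cand_col.
    groups = {}
    for row in src:
        key = tuple(row[c] for c in group_cols)
        if key not in groups or row[cand_col] > groups[key][cand_col]:
            groups[key] = row
    return list(groups.values())

srcJson_2 = [
    {"FirstName": "Satyaki", "LastName": "", "Sal": 1000},
    {"FirstName": "Satyaki", "LastName": "", "Sal": 700},
    {"FirstName": "Archi", "LastName": "Bose", "Sal": 500},
    {"FirstName": "Deb", "LastName": "", "Sal": 9500},
    {"FirstName": "Archi", "LastName": "Bose", "Sal": 4500},
]

# One row per (FirstName, LastName) group, carrying that group's highest Sal.
tarJson_max = partition_by_max(srcJson_2, ['FirstName', 'LastName'], 'Sal')
```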
In that case, the output will look something like –

As you can see from the above picture, two things happen. First, it removes any duplicate entries: Satyaki has exactly two identical rows, so one is removed. However, as part of the partition-by clause, it keeps two entries for Archi, as the Location differs. Deb appears once, as expected.
Let’s run our application & find out the output –

So, we meet our expectation.
Case 2:
The same logic applies to Min as well.
Hence, as per the table, we should expect the output as –

And, the output of our application run is –

So, this also comes out as expected.
Case 3:
Let’s check for average –

The only thing I wanted to point out: since we have two separate entries for Satyaki, the average includes the salary from both rows, rightfully so. Hence, the average is (1000 + 700) / 2 = 850.
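That arithmetic can be double-checked with a tiny plain-Python sketch of a partition-by-average (again, an illustration, not the library's code) –

```python
from collections import defaultdict

def partition_by_avg(src, group_cols, cand_col):
    # Gather every cand_col value per group, then emit one averaged row per group.
    buckets = defaultdict(list)
    for row in src:
        buckets[tuple(row[c] for c in group_cols)].append(row[cand_col])
    out = []
    for key, vals in buckets.items():
        rec = dict(zip(group_cols, key))
        rec[cand_col] = sum(vals) / len(vals)
        out.append(rec)
    return out

srcJson_2 = [
    {"FirstName": "Satyaki", "LastName": "", "Sal": 1000},
    {"FirstName": "Satyaki", "LastName": "", "Sal": 700},
    {"FirstName": "Archi", "LastName": "Bose", "Sal": 500},
    {"FirstName": "Deb", "LastName": "", "Sal": 9500},
    {"FirstName": "Archi", "LastName": "Bose", "Sal": 4500},
]

# Satyaki's group averages (1000 + 700) / 2 = 850.0
tarJson_avg = partition_by_avg(srcJson_2, ['FirstName', 'LastName'], 'Sal')
```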
Let’s run our application –

So, we’ve achieved our target.
Case 4:
Let’s check for Sum.

Now, let’s run our application –

In the next installment, we'll discuss the last set of functions from this package, i.e. regular expressions in JSON.
I hope, you’ll like this presentation.
Let me know – if you find any special bug. I’ll look into that.
Till then – Happy Avenging!
Note: All the data posted here are representational data & available over the internet & for educational purpose only.