I’ll bring an exciting Streamlit app that renders a real-time dashboard by consuming all the events from an Ably channel.
Once more, I’ll be utilizing my IoT emulator, which feeds real-time events, based on user inputs, to the Ably channel that the Streamlit-based app subscribes to.
However, before we dig deep into this, I would like to share a sample run.
Demo
Isn’t this exciting? We can use our custom-built IoT emulator to capture real-time events into the Ably queue & then transform those raw events into more meaningful KPIs. Let’s deep-dive then.
Architecture:
Let’s explore the broad-level architecture/flow –
As you can see, the green box is a demo IoT application that generates events & pushes them into the Ably queue, while the Streamlit-based dashboard app consumes the events & transforms them into more meaningful metrics.
Package Installation:
Let us understand the sample packages that are required for this task.
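Since the original install commands were captured as an image, here is a likely set, inferred from the imports used in the snippets below (treat it as a sketch, not the definitive list) –

pip install streamlit
pip install streamlit-autorefresh
pip install ably
pip install plotly
pip install pandas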
Since this is an extension of our previous post, we won’t revisit the scripts already discussed there. Instead, we’ll talk about the enhanced scripts & the new scripts required for this use case.
1. app.py (This script will consume real-time streaming data coming out of a hosted API source using another popular third-party service named Ably. Ably mimics the pub-sub streaming concept, which might be extremely useful for any start-up. The script then translates the stream into many meaningful KPIs in a Streamlit-based dashboard app.)
Note that we’re not going to discuss the entire script here; only the relevant parts. However, you can get the complete scripts from the GitHub repository.
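The embedded snippet for the gauge function didn’t carry over here, so below is a minimal sketch of createHumidityGauge reconstructed from the walkthrough that follows; the exact bar color, background, font & margins are my assumptions, & the definitive version is in the GitHub repository –

import plotly.graph_objects as go

def createHumidityGauge(humidity_value):
    # Indicator in "gauge+number" mode shows the dial & the numeric value together
    fig = go.Figure(go.Indicator(
        mode="gauge+number",
        value=humidity_value,
        domain={'x': [0, 1], 'y': [0, 1]},
        title={'text': "Humidity", 'font': {'size': 24}},
        gauge={
            'axis': {'range': [0, 100]},                     # humidity as a 0-100 percentage
            'bar': {'color': "darkblue"},                    # assumed bar color
            'steps': [
                {'range': [0, 50], 'color': "cyan"},         # lower humidity band
                {'range': [50, 100], 'color': "royalblue"},  # upper humidity band
            ],
            'threshold': {
                'line': {'color': "red", 'width': 4},        # red marker at the current value
                'thickness': 0.75,
                'value': humidity_value,
            },
        },
    ))
    # Assumed layout values; tune the height, colors & margins to taste
    fig.update_layout(height=270, paper_bgcolor="white",
                      font={'family': "Arial"},
                      margin=dict(l=10, r=10, t=50, b=10))
    return fig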
The above function creates a customized humidity gauge that visually represents a given humidity value, making it easy to read and understand at a glance.
This code defines a function “createHumidityGauge” that creates a visual gauge (like a meter) to display a humidity value. Here’s a simple breakdown of what it does:
Function Definition: It starts by defining a function named createHumidityGauge that takes one parameter, humidity_value, which is the humidity level you want to display on the gauge.
Creating the Gauge: Inside the function, it creates a figure using Plotly (a plotting library) with a specific type of chart called an Indicator. This Indicator is set to display in “gauge+number” mode, meaning it shows both a gauge visual and the numeric value of the humidity.
Setting Gauge Properties:
The value is set to the humidity_value parameter, so the gauge shows this humidity level.
The domain sets the position of the gauge on the plot, which is set to fill the available space ([0, 1] for both x and y axes).
The title is set to “Humidity” with a font size of 24, labeling the gauge.
The gauge section defines the appearance and behavior of the gauge, including:
An axis that goes from 0 to 100 (assuming humidity is measured as a percentage from 0% to 100%).
The color and style of the gauge’s bar and background.
Colored steps indicating different ranges of humidity (cyan for 0-50% and royal blue for 50-100%).
A threshold line that appears at the value of the humidity, marked in red to stand out.
Finalizing the Gauge Appearance: The function then updates the layout of the figure to set its height, background color, font style, and margins to make sure the gauge looks nice and is visible.
Returning the Figure: Finally, the function returns the fig object, which is the fully configured gauge, ready to be displayed.
Other similar functions will repeat the same steps.
def createTemperatureLineChart(data):
    # Assuming 'data' is a DataFrame with a 'Timestamp' index and a 'Temperature' column
    fig = px.line(data, x=data.index, y='Temperature', title='Temperature Vs Time')
    fig.update_layout(height=270)  # Specify the desired height here
    return fig
The above function takes a set of temperature data indexed by timestamp and creates a line chart that visually represents how the temperature changes over time.
This code defines a function “createTemperatureLineChart” that creates a line chart to display temperature data over time. Here’s a simple summary of what it does:
Function Definition: It starts by defining a function named “createTemperatureLineChart” that takes one parameter, data, which is expected to be a DataFrame (a type of data structure used in pandas, a Python data analysis library). This DataFrame should have a ‘Timestamp’ index (meaning each row represents a different point in time) and a ‘Temperature’ column containing temperature values.
Creating the Line Chart: The function uses Plotly Express (a plotting library) to create a line chart with the following characteristics:
The x-axis represents time, taken from the DataFrame’s index (‘Timestamp’).
The y-axis represents temperature, taken from the ‘Temperature’ column in the DataFrame.
The chart is titled ‘Temperature Vs Time’, clearly indicating what the chart represents.
Customizing the Chart: It then updates the layout of the chart to set a specific height (270 pixels) for the chart, making it easier to view.
Returning the Chart: Finally, the function returns the fig object, which is the fully prepared line chart, ready to be displayed.
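The sidebar snippet itself lives in the GitHub repository; a plausible minimal version, assuming the selected_kpis name used by the layout code further below, looks like this –

# Sidebar selector feeding the selected_kpis list used by the layout code below
selected_kpis = st.sidebar.multiselect(
    "Select KPIs",
    options=["Temperature", "Humidity", "Pressure"],
    default=["Temperature", "Humidity", "Pressure"],
)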
The above code will create a sidebar with drop-down lists, which will show the KPIs (“Temperature”, “Humidity”, “Pressure”).
# Split the layout into columns for KPIs and graphs
gauge_col, kpi_col, graph_col = st.columns(3)

# Auto-refresh setup
st_autorefresh(interval=7000, key='data_refresh')

# Fetching real-time data
data = getData(var1, DInd)

st.markdown("""
    <style>
    .stEcharts {
        margin-bottom: -50px;
    }
    /* Class might differ, inspect the HTML to find the correct class name */
    </style>
""", unsafe_allow_html=True)

# Display gauges at the top of the page
gauges = st.container()

with gauges:
    col1, col2, col3 = st.columns(3)

    with col1:
        humidity_value = round(data['Humidity'].iloc[-1], 2)
        humidity_gauge_fig = createHumidityGauge(humidity_value)
        st.plotly_chart(humidity_gauge_fig, use_container_width=True)

    with col2:
        temp_value = round(data['Temperature'].iloc[-1], 2)
        temp_gauge_fig = createTempGauge(temp_value)
        st.plotly_chart(temp_gauge_fig, use_container_width=True)

    with col3:
        pressure_value = round(data['Pressure'].iloc[-1], 2)
        pressure_gauge_fig = createPressureGauge(pressure_value)
        st.plotly_chart(pressure_gauge_fig, use_container_width=True)

# Next row for actual readings and charts side-by-side
readings_charts = st.container()

# Display KPIs and their trends
with readings_charts:
    readings_col, graph_col = st.columns([1, 2])

    with readings_col:
        st.subheader("Latest Readings")
        if "Temperature" in selected_kpis:
            st.metric("Temperature", f"{temp_value:.2f}%")
        if "Humidity" in selected_kpis:
            st.metric("Humidity", f"{humidity_value:.2f}%")
        if "Pressure" in selected_kpis:
            st.metric("Pressure", f"{pressure_value:.2f}%")

    # Graph placeholders for each KPI
    with graph_col:
        if "Temperature" in selected_kpis:
            temperature_fig = createTemperatureLineChart(data.set_index("Timestamp"))
            # Display the Plotly chart in Streamlit with specified dimensions
            st.plotly_chart(temperature_fig, use_container_width=True)
        if "Humidity" in selected_kpis:
            humidity_fig = createHumidityLineChart(data.set_index("Timestamp"))
            # Display the Plotly chart in Streamlit with specified dimensions
            st.plotly_chart(humidity_fig, use_container_width=True)
        if "Pressure" in selected_kpis:
            pressure_fig = createPressureLineChart(data.set_index("Timestamp"))
            # Display the Plotly chart in Streamlit with specified dimensions
            st.plotly_chart(pressure_fig, use_container_width=True)
The code begins by splitting the Streamlit web page layout into three columns to separately display Key Performance Indicators (KPIs), gauges, and graphs.
It sets up an auto-refresh feature with a 7-second interval, ensuring the data displayed is regularly updated without manual refreshes.
Real-time data is fetched using a function called getData, which takes the parameters var1 and DInd (defined elsewhere in the full script).
A CSS style is injected into the Streamlit page to adjust the margin of Echarts elements, which may be used to improve the visual layout of the page.
A container for gauges is created at the top of the page, with three columns inside it dedicated to displaying humidity, temperature, and pressure gauges.
Each gauge (humidity, temperature, and pressure) is created by rounding the last value from the fetched data to two decimal places and then visualized using respective functions that create Plotly gauge charts.
Below the gauges, another container is set up for displaying the latest readings and their corresponding graphs in a side-by-side layout, using two columns.
The left column under “Latest Readings” displays the latest values for selected KPIs (temperature, humidity, pressure) as metrics.
In the right column, for each selected KPI, a line chart is created using data with timestamps as indices and displayed using Plotly charts, allowing for a visual trend analysis.
This structured approach enables a dynamic and interactive dashboard within Streamlit, offering real-time insights into temperature, humidity, and pressure with both numeric metrics and graphical trends, optimized for regular data refreshes and user interactivity.
Run:
Let us walk through some of the important screenshots of this application –
So, we’ve done it.
I’ll bring some more exciting topics in the coming days from the Python verse.
Till then, Happy Avenging! 🙂
Note: All the data & scenarios posted here are representational, available over the internet & intended for educational purposes only.
There is always room for improvement in this kind of model & the solution associated with it. I’ve shown the basic ways to achieve the same for educational purposes only.
Today, we want to make our use case a little harder & more realistic. We want to consume real-time live trade data through the FinnHub API & display it on our dashboard using another brilliant API, H2O-Wave, with the help of native Python.
The use case mentioned above is extremely useful, & we’ll be using the following third-party APIs to achieve it –
FinnHub: For more information, please click the following link.
Ably: For more information, please click the following link.
H2O-Wave: For more information, please click the following link.
I’m not going to discuss these topics further, as I’ve already covered them in separate earlier posts. Please refer to the following threads for detailed information –
In this post, we will address the advanced concept compared to the previous post mentioned above. Let us first look at how the run looks before we start exploring the details –
Real-time trade dashboard
Let us explore the architecture of this implementation –
Architecture Diagram
This application will talk to the FinnHub websocket & consume real-time trade data from it, which will be temporarily stored in our Ably channel. The dashboard will pick up each message & display it as soon as new data arrives for that trading company.
For this use case, you need to install the following packages –
STEP – 1:
Main Packages
STEP – 2:
Main Packages – Continue
STEP – 3:
Main Packages – Continue
STEP – 4:
Main Packages – End
You can copy the following commands to install the above-mentioned packages –
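The command screenshots above don’t paste well as text; based on the imports in the scripts that follow, the set likely includes –

pip install ably
pip install h2o-wave
pip install pandas
pip install websocket-client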
Let’s explore the important data points that you need to capture from the FinnHub portal to consume the real-time trade data –
FinnHub Portal
We’ve two main scripts. The first script consumes the streaming data & pushes it into a message queue, while the other extracts the data from the queue, transforms it & publishes it to the real-time dashboard.
1. dashboard_finnhub.py ( This native Python script will consume streaming data & create the live trade dashboard. )
Let’s explore the key snippets from the above script –
def process_DF(inputDF, inputDFUnq):
    try:
        # Core Business logic
        # The application will show a default value for any
        # trade-in stock in case that data doesn't arrive
        # from the source.
        df_conv = inputDF
        df_unique_fin = inputDFUnq

        # Getting block count
        df_conv['max_count'] = df_conv.groupby('default_rank')['default_rank'].transform('count')
        l.logr('3. max_df.csv', 'Y', df_conv, subdir)

        # Sorting the output
        sorted_df = df_conv.sort_values(by=['default_rank', 's'], ascending=True)

        # New Column List Orders
        column_order = ['s', 'default_rank', 'max_count', 'p', 't', 'v']
        df_fin = sorted_df.reindex(column_order, axis=1)
        l.logr('4. sorted_df.csv', 'Y', df_fin, subdir)

        # Now splitting the sorted df into two sets
        lkp_max_count = 4
        df_fin_na = df_fin[(df_fin['max_count'] == lkp_max_count)]
        l.logr('5. df_fin_na.csv', 'Y', df_fin_na, subdir)

        df_fin_req = df_fin[(df_fin['max_count'] != lkp_max_count)]
        l.logr('6. df_fin_req.csv', 'Y', df_fin_req, subdir)

        # To perform a cross join, we create a key column
        # in both DataFrames & merge on that key.
        df_unique_fin['key'] = 1
        df_fin_req['key'] = 1

        # Dropping unwanted columns
        df_unique_fin.drop(columns=['t'], inplace=True)
        l.logr('7. df_unique_slim.csv', 'Y', df_unique_fin, subdir)

        # Padding with dummy key values
        merge_df = p.merge(df_unique_fin, df_fin_req, on=['key']).drop(columns=['key'])
        l.logr('8. merge_df.csv', 'Y', merge_df, subdir)

        # Sorting the output
        sorted_merge_df = merge_df.sort_values(by=['default_rank_y', 's_x'], ascending=True)
        l.logr('9. sorted_merge_df.csv', 'Y', sorted_merge_df, subdir)

        # Calling new derived logic
        sorted_merge_df['derived_p'] = sorted_merge_df.apply(lambda row: calc_p(row), axis=1)
        sorted_merge_df['derived_v'] = sorted_merge_df.apply(lambda row: calc_v(row), axis=1)
        l.logr('10. sorted_merge_derived.csv', 'Y', sorted_merge_df, subdir)

        # Dropping unwanted columns
        sorted_merge_df.drop(columns=['default_rank_x', 'p_x', 'v_x', 's_y', 'p_y', 'v_y'], inplace=True)

        # Renaming the columns
        sorted_merge_df.rename(columns={'s_x': 's'}, inplace=True)
        sorted_merge_df.rename(columns={'default_rank_y': 'default_rank'}, inplace=True)
        sorted_merge_df.rename(columns={'derived_p': 'p'}, inplace=True)
        sorted_merge_df.rename(columns={'derived_v': 'v'}, inplace=True)
        l.logr('11. org_merge_derived.csv', 'Y', sorted_merge_df, subdir)

        # Aligning columns
        column_order = ['s', 'default_rank', 'max_count', 'p', 't', 'v']
        merge_fin_df = sorted_merge_df.reindex(column_order, axis=1)
        l.logr('12. merge_fin_df.csv', 'Y', merge_fin_df, subdir)

        # Finally, appending these two DataFrames (df_fin_na & merge_fin_df)
        frames = [df_fin_na, merge_fin_df]
        fin_df = p.concat(frames, keys=["s", "default_rank", "max_count"])
        l.logr('13. fin_df.csv', 'Y', fin_df, subdir)

        # Final clearance & organization
        fin_df.drop(columns=['default_rank', 'max_count'], inplace=True)
        l.logr('14. Final.csv', 'Y', fin_df, subdir)

        # Adjusting key columns
        fin_df.rename(columns={'s': 'Company'}, inplace=True)
        fin_df.rename(columns={'p': 'CurrentExchange'}, inplace=True)
        fin_df.rename(columns={'v': 'Change'}, inplace=True)
        l.logr('15. TransormedFinal.csv', 'Y', fin_df, subdir)

        return fin_df
    except Exception as e:
        print('$' * 120)
        x = str(e)
        print(x)
        print('$' * 120)
        df = p.DataFrame()
        return df
The above function checks whether the queue is sending the key trade data for all the companies. In our use case, we’re testing with four companies, which are as follows –
a. AAPL
b. AMZN
c. BINANCE:BTCUSDT
d. IC MARKETS:1
Every message contains data from all four of these companies together. If any company’s data is missing, this transformation adds a dummy record for that missing company, so each message bouquet carries a uniform number of entries, with dummy trade values filled in for the missing information.
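For reference, the consumption snippet in question (it reappears verbatim in the dashboard script later in this post) is –

# Fetching the data
client = AblyRest('XXXXX.YYYYYY:94384jjdhdh98kiidLO')
channel = client.channels.get('sd_channel')
message_page = channel.history()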
In the above snippet, the application will consume the streaming data from the Ably queue.
for i in message_page.items:
    print('Last Msg: {}'.format(i.data))
    json_data = json.loads(i.data)

    # Converting JSON to Dataframe
    df = p.json_normalize(json_data)
    df.columns = df.columns.map(lambda x: x.split(".")[-1])

    if cnt == 0:
        df_conv = df
    else:
        d_frames = [df_conv, df]
        df_conv = p.concat(d_frames)

    cnt += 1
The above snippet will convert the streaming messages to a more meaningful pandas data-frame, which we can use for a wide variety of analytics.
# Converting dataframe to a desired Series
f = CategoricalSeries(fin_df)

for j in range(count_row):
    # Getting the series values from above
    cat, val, pc = f.next()

    # Getting Individual Element & convert them to Series
    if ((start_pos + interval) <= count_row):
        end_pos = start_pos + interval
    else:
        end_pos = start_pos + (count_row - start_pos)

    split_df = df_unq_finale.iloc[start_pos:end_pos]

    if ((start_pos > count_row) | (start_pos == count_row)):
        pass
    else:
        start_pos = start_pos + interval

    x_currency = str(split_df.iloc[0]['Company'])

    ####################################################
    ##### Debug Purpose                        #########
    ####################################################
    print('Company: ', x_currency)
    print('J: ', str(j))
    print('Cat: ', cat)
    ####################################################
    ##### End Of Debug                         #########
    ####################################################

    c = page.add(f'e{j+1}', ui.tall_series_stat_card(
        box=f'{j+1} 1 1 2',
        title=x_currency,
        value='=${{intl qux minimum_fraction_digits=2 maximum_fraction_digits=2}}',
        aux_value='={{intl quux style="percent" minimum_fraction_digits=1 maximum_fraction_digits=1}}',
        data=dict(qux=val, quux=pc),
        plot_type='area',
        plot_category='foo',
        plot_value='qux',
        plot_color=next_color(),
        plot_data=data('foo qux', -15),
        plot_zero_value=0,
        plot_curve=next_curve(),
    ))
    large_lines.append((f, c))

page.save()

while update_freq > 0:
    time.sleep(update_freq)
    for f, c in large_lines:
        cat, val, pc = f.next()

        print('Update Cat: ', cat)
        print('Update Val: ', val)
        print('Update pc: ', pc)
        print('*' * 160)

        c.data.qux = val
        c.data.quux = pc / 100
        c.plot_data[-1] = [cat, val]
    page.save()
The above snippet feeds the data into the H2O-Wave framework, which exposes it as a beautiful, easily readable GUI through an interactive dashboard.
2. publish_ably_mod.py ( This native Python script will consume streaming data into the Ably message queue. )
As we already discussed, we’ll pass a default set of data for all the candidate companies.
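The exact default payload isn’t reproduced here, but given the s/p/t/v columns that process_DF expects (FinnHub’s trade fields: symbol, price, timestamp, volume), it plausibly looks like this hypothetical sketch –

import json

# Hypothetical default records, one per candidate company, with zeroed trade values
jdata = json.dumps([
    {'s': 'AAPL', 'p': 0.0, 't': 0, 'v': 0.0},
    {'s': 'AMZN', 'p': 0.0, 't': 0, 'v': 0.0},
    {'s': 'BINANCE:BTCUSDT', 'p': 0.0, 't': 0, 'v': 0.0},
    {'s': 'IC MARKETS:1', 'p': 0.0, 't': 0, 'v': 0.0},
])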
# Publish a message to the sd_channel channel
channel.publish('event', jdata)
# Publish rest of the messages to the sd_channel channel
channel.publish('event', jdata_dyn)
The script then sends the company-specific trade queries to FinnHub through the websocket app.
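A hypothetical sketch of that subscription, following FinnHub’s documented websocket message format (the actual publish_ably_mod.py may structure it differently, & the token placeholder is mine) –

import websocket

def on_open(ws):
    # Subscribe to each candidate company once the socket opens
    for symbol in ['AAPL', 'AMZN', 'BINANCE:BTCUSDT', 'IC MARKETS:1']:
        ws.send('{"type":"subscribe","symbol":"%s"}' % symbol)

ws = websocket.WebSocketApp(
    'wss://ws.finnhub.io?token=<your-finnhub-token>',
    on_open=on_open,
)
ws.run_forever()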
3. clsConfig.py ( This file contains the configuration details. )
Today, I’ll demonstrate one of the most fascinating ways to capture real-time streaming data in a dashboard. It’s a dream for any developer who wants to build an application involving streaming data, an API & a dashboard.
Why don’t we look at a run first, to make this thread more interesting?
Real-Time Dashboard using streaming data
Today, I’ll be using the two most essential services to achieve that goal.
Ably
H2O-Wave
Let’s discuss these two services briefly.
Why did I use “Ably” here?
One of my scenarios was to consume real-time currency data. Even after checking the paid APIs, I was not getting what I was looking for. Hence, I decided to use a service that can mimic streaming by publishing my data through a channel. Once published, I consume the posted data in my application to create this new dashboard.
Using Ably, you can leverage their cloud platform to publish & consume data with a free developer account, which is sufficient for anyone starting out.
To better understand this, we need to grasp the basic concept of “pub-sub”. Here is an important page from their site that I would like to embed for your reference –
Source: Ably
To know more about this, please refer to the following link.
Why did I use “H2O-Wave” here?
H2O-Wave is a relatively new framework with some outstanding capabilities for visualizing your data using native Python.
Pre-Steps:
We need to register with Ably. Here are some of the useful screens that we should explore –
API-Key Page
Successful creation of an app will generate the API key. Make sure that you note down the channel details as well.
Quota Limit
The above page captures the usage details. Since this is a free subscription, you will be blocked once you exhaust your quota. However, for paid users, this is one of the vital pages for controlling their budget.
Message Published & Consumption Visuals
Like any other cloud service, you can check your message publishing & consumption on this page.
This is the main landing page for H2O-Wave –
H2O Wave
They have quite a few example snippets. However, these samples use random data, which makes them relatively easy to run; it takes some effort to tailor them for real-life scenarios.
You need to install the following libraries in Python –
pip install ably
pip install h2o-wave
We’ve two scripts. We’re not going to discuss the publishing script here; we’ll only cover the consumption script, which also generates the dashboard. If you need the publishing script, post a comment & I’ll provide it.
1. dashboard_st.py ( This native Python script will consume streaming data & create live dashboard. )
##########################################################
#### Template Written By: H2O Wave ####
#### Enhanced with Streaming Data By: Satyaki De ####
#### Base Version Enhancement On: 20-Dec-2020 ####
#### Modified On 26-Dec-2020 ####
#### ####
#### Objective: This script will consume real-time ####
#### streaming data coming out from a hosted API ####
#### sources using another popular third-party ####
#### service named Ably. Ably mimics pubsub Streaming ####
#### concept, which might be extremely useful for ####
#### any start-ups. ####
##########################################################
import time
from h2o_wave import site, data, ui
from ably import AblyRest
import pandas as p
import json


class DaSeries:
    def __init__(self, inputDf):
        self.Df = inputDf
        self.count_row = inputDf.shape[0]
        self.start_pos = 0
        self.end_pos = 0
        self.interval = 1

    def next(self):
        try:
            # Getting Individual Element & convert them to Series
            if ((self.start_pos + self.interval) <= self.count_row):
                self.end_pos = self.start_pos + self.interval
            else:
                self.end_pos = self.start_pos + (self.count_row - self.start_pos)

            split_df = self.Df.iloc[self.start_pos:self.end_pos]

            if ((self.start_pos > self.count_row) | (self.start_pos == self.count_row)):
                pass
            else:
                self.start_pos = self.start_pos + self.interval

            x = float(split_df.iloc[0]['CurrentExchange'])
            dx = float(split_df.iloc[0]['Change'])

            # Emptying the existing dataframe
            split_df = p.DataFrame(None)

            return x, dx
        except:
            x = 0
            dx = 0
            return x, dx


class CategoricalSeries:
    def __init__(self, sourceDf):
        self.series = DaSeries(sourceDf)
        self.i = 0

    def next(self):
        x, dx = self.series.next()
        self.i += 1
        return f'C{self.i}', x, dx


light_theme_colors = '$red $pink $purple $violet $indigo $blue $azure $cyan $teal $mint $green $amber $orange $tangerine'.split()
dark_theme_colors = '$red $pink $blue $azure $cyan $teal $mint $green $lime $yellow $amber $orange $tangerine'.split()

_color_index = -1
colors = dark_theme_colors


def next_color():
    global _color_index
    _color_index += 1
    return colors[_color_index % len(colors)]


_curve_index = -1
curves = 'linear smooth step stepAfter stepBefore'.split()


def next_curve():
    global _curve_index
    _curve_index += 1
    return curves[_curve_index % len(curves)]


def create_dashboard(update_freq=0.0):
    page = site['/dashboard_st']

    # Fetching the data
    client = AblyRest('XXXXX.YYYYYY:94384jjdhdh98kiidLO')
    channel = client.channels.get('sd_channel')
    message_page = channel.history()

    # Counter Value
    cnt = 0

    # Declaring Global Data-Frame
    df_conv = p.DataFrame()

    for i in message_page.items:
        print('Last Msg: {}'.format(i.data))
        json_data = json.loads(i.data)

        # Converting JSON to Dataframe
        df = p.json_normalize(json_data)
        df.columns = df.columns.map(lambda x: x.split(".")[-1])

        if cnt == 0:
            df_conv = df
        else:
            d_frames = [df_conv, df]
            df_conv = p.concat(d_frames)

        cnt += 1

    # Resetting the Index Value
    df_conv.reset_index(drop=True, inplace=True)

    print('DF:')
    print(df_conv)

    df_conv['default_rank'] = df_conv.groupby(['Currency']).cumcount() + 1
    lkp_rank = 1
    df_unique = df_conv[(df_conv['default_rank'] == lkp_rank)]

    print('Rank DF Unique:')
    print(df_unique)

    count_row = df_unique.shape[0]

    large_lines = []
    start_pos = 0
    end_pos = 0
    interval = 1

    # Converting dataframe to a desired Series
    f = CategoricalSeries(df_conv)

    for j in range(count_row):
        # Getting the series values from above
        cat, val, pc = f.next()

        # Getting Individual Element & convert them to Series
        if ((start_pos + interval) <= count_row):
            end_pos = start_pos + interval
        else:
            end_pos = start_pos + (count_row - start_pos)

        split_df = df_unique.iloc[start_pos:end_pos]

        if ((start_pos > count_row) | (start_pos == count_row)):
            pass
        else:
            start_pos = start_pos + interval

        x_currency = str(split_df.iloc[0]['Currency'])

        c = page.add(f'e{j+1}', ui.tall_series_stat_card(
            box=f'{j+1} 1 1 2',
            title=x_currency,
            value='=${{intl qux minimum_fraction_digits=2 maximum_fraction_digits=2}}',
            aux_value='={{intl quux style="percent" minimum_fraction_digits=1 maximum_fraction_digits=1}}',
            data=dict(qux=val, quux=pc),
            plot_type='area',
            plot_category='foo',
            plot_value='qux',
            plot_color=next_color(),
            plot_data=data('foo qux', -15),
            plot_zero_value=0,
            plot_curve=next_curve(),
        ))
        large_lines.append((f, c))

    page.save()

    while update_freq > 0:
        time.sleep(update_freq)
        for f, c in large_lines:
            cat, val, pc = f.next()
            c.data.qux = val
            c.data.quux = pc / 100
            c.plot_data[-1] = [cat, val]
        page.save()


create_dashboard(update_freq=0.25)
Some of the key snippets from the above code are –
The DaSeries & CategoricalSeries classes at the top of the script create a series of data out of a pandas DataFrame. They consume one record at a time & pass it to the dashboard for real-time updates.
# Fetching the data
client = AblyRest('XXXXX.YYYYYY:94384jjdhdh98kiidLO')
channel = client.channels.get('sd_channel')
message_page = channel.history()
In the above code, the application will consume the real-time data out of Ably’s channel.
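The lines being referred to next are these, reproduced from the script above –

df_conv['default_rank'] = df_conv.groupby(['Currency']).cumcount() + 1
lkp_rank = 1
df_unique = df_conv[(df_conv['default_rank'] == lkp_rank)]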
In the above code, the application is uniquely identifying the first instance of currency entries, which will be passed to the initial dashboard page before consuming the array of updates.
f = CategoricalSeries(df_conv)
In the above code, the application is creating an instance of the intended categorical series.
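And here is the binding snippet, again reproduced from the script above –

c = page.add(f'e{j+1}', ui.tall_series_stat_card(
    box=f'{j+1} 1 1 2',
    title=x_currency,
    value='=${{intl qux minimum_fraction_digits=2 maximum_fraction_digits=2}}',
    aux_value='={{intl quux style="percent" minimum_fraction_digits=1 maximum_fraction_digits=1}}',
    data=dict(qux=val, quux=pc),
    plot_type='area',
    plot_category='foo',
    plot_value='qux',
    plot_color=next_color(),
    plot_data=data('foo qux', -15),
    plot_zero_value=0,
    plot_curve=next_curve(),
))
large_lines.append((f, c))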
The above code is a standard way to bind the streaming data with the H2O-Wave dashboard.
while update_freq > 0:
    time.sleep(update_freq)
    for f, c in large_lines:
        cat, val, pc = f.next()
        c.data.qux = val
        c.data.quux = pc / 100
        c.plot_data[-1] = [cat, val]
    page.save()
These last few lines capture the continuous streaming data & keep updating the numbers on your dashboard.
Since I’ve already provided the run video of my application, here are a few important screens –
Case 1:
Wave Server Start Command
Case 2:
Publishing stream data
Case 3:
Consuming Stream Data & Publishing to Dashboard
Case 4:
Dashboard Data
So, finally, we have done it.
You will get the complete codebase from the following GitHub link.
I’ll bring some more exciting topics in the coming days from the Python verse.
Till then, Happy Avenging! 😀
Note: All the data & scenarios posted here are representational, available over the internet & intended for educational purposes only.