The Tale of a Report

Tuesday, August 30, 2016 by Alexander Huber

Time cockpit already offers extensive reporting functionality using SQL Server Reporting Services. With Reporting Services you can create your own nicely formatted reports that can be filled with your time cockpit data.
However, there are rather advanced scenarios where the built-in reporting capabilities alone do not go far enough. Recently, we refactored a crucial report for one of our long-term customers. In this article, we share the main challenges and how we solved them.

Performance

The report that was refactored contains important organizational KPIs. The calculations done for the report require quite a lot of resources. This is because

  • The report needs to take into account all time sheet entries of a given year. 
  • Further, the report can be generated for different aggregation levels, namely per user, per subsidiary, and as an overview of all subsidiaries. Since the calculations and aggregations were done in a conventional time cockpit Iron Python list, they took their time, often more than a minute.
  • Last, the report became slower as the year progressed, because the number of time sheet entries grows steadily.

Interoperability

Besides time cockpit, our customer also uses Tableau for data analysis and reporting. However, the report we refactored relied heavily on domain-specific functions. For example, it used time cockpit functions that determine the actual and planned hours of work, taking into account all absence times and working time models. Due to their complexity, these functions are not offered via the time cockpit Web API. Thus, it was not possible to access time cockpit’s database directly with Tableau and rebuild the report there. The benefit of Tableau over time cockpit is that it lets you define powerful dynamic reports.

On-Demand vs. Periodic Execution

Originally, the data of the report was calculated and the report was generated on the fly. This approach was fine for a small amount of data, but it did not scale well: generating the report simply took too long. What was needed was an approach that allowed us to execute the same calculations both on demand and periodically, without duplicating the calculation logic.

Solution

To tackle the above issues we implemented the solution architecture depicted in the figure above. The main components are:

  • A new time cockpit action
  • A dedicated database table for the generated data
  • A time cockpit aggregation list
  • Azure Functions
  • The time cockpit Web API

Time Cockpit Action

To make the algorithm callable by an external component, we migrated the source code of the Iron Python list that calculated the report data into a dedicated action. Basically, the action performs the same calculations as the list, but saves the results to a dedicated database table. The only difference from the original algorithm is that the action generates the data on the lowest aggregation level.
Because we encapsulated the algorithm that generates the report data, we were able to keep using the domain logic of time cockpit. If that domain logic is updated for whatever reason (bug fixes, patches, additional functionality), our customer will benefit from it.
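
The action itself lives in time cockpit's Iron Python scripting environment and uses time cockpit's domain functions. Purely to illustrate the pattern (calculate on the lowest aggregation level, then persist), here is a minimal C# sketch; the entity shapes and names are invented for this example and are not time cockpit's actual API:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical shapes; the real action works on time cockpit entities and
    // uses time cockpit's domain functions for actual/planned hours.
    public record TimeSheetEntry(string Employee, string Subsidiary, DateTime Day, double Hours);
    public record ReportRow(string Employee, string Subsidiary, int Year, double ActualHours);

    public static class ReportDataAction
    {
        // Calculates the report data on the lowest aggregation level (per
        // employee); the real action persists these rows to the dedicated
        // database table.
        public static IReadOnlyList<ReportRow> Calculate(IEnumerable<TimeSheetEntry> entries, int year) =>
            entries
                .Where(e => e.Day.Year == year)
                .GroupBy(e => (e.Employee, e.Subsidiary))
                .Select(g => new ReportRow(g.Key.Employee, g.Key.Subsidiary, year, g.Sum(e => e.Hours)))
                .ToList();
    }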

Dedicated Database Table

As mentioned above, we implemented an action that performs the calculations for the report and persists its results in a dedicated database table.
With this dedicated table, our customer is now able to define reports in Tableau (or in other reporting tools such as Power BI) based on the data prepared by time cockpit. The customer benefits from both time cockpit's domain logic and the reporting power of Tableau.
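
As a hedged illustration of what persisting to such a table could look like with plain ADO.NET, reusing the ReportRow shape from the sketch above; the table and column names (rep_KpiReportData and so on) are made up, and the real action writes through time cockpit's own data layer:

    using System.Collections.Generic;
    using System.Data.SqlClient; // Microsoft.Data.SqlClient on newer stacks

    public static class ReportRowWriter
    {
        // Inserts the pre-calculated rows into the dedicated result table so
        // that external tools such as Tableau or Power BI can query them.
        public static void Write(string connectionString, IEnumerable<ReportRow> rows)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                foreach (var row in rows)
                {
                    using (var command = new SqlCommand(
                        "INSERT INTO rep_KpiReportData (Employee, Subsidiary, [Year], ActualHours) " +
                        "VALUES (@employee, @subsidiary, @year, @hours)", connection))
                    {
                        command.Parameters.AddWithValue("@employee", row.Employee);
                        command.Parameters.AddWithValue("@subsidiary", row.Subsidiary);
                        command.Parameters.AddWithValue("@year", row.Year);
                        command.Parameters.AddWithValue("@hours", row.ActualHours);
                        command.ExecuteNonQuery();
                    }
                }
            }
        }
    }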

Time Cockpit Aggregation List

Now, the action that runs the calculations for the report produces results on the most granular level (that is, per employee). However, the original report could display data aggregated per employee, per subsidiary, and across all subsidiaries.

To achieve that, we created a custom list on top of the dedicated database table (see Customizing Lists). The list provides the same filter parameters as the original one and aggregates the granular values to the level requested by the user.

This is also the list the original SSRS report is attached to. The nice thing about this time cockpit list is that it "abstracts away" the actual structure of the dedicated result table. Thus, we were able to reuse the original SSRS report without major alterations.
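
Conceptually, the roll-up the list performs resembles the following sketch, again reusing the ReportRow shape from above; AggregationLevel and the "(all)" placeholder are invented for illustration, since the real list is defined inside time cockpit itself:

    using System.Collections.Generic;
    using System.Linq;

    public enum AggregationLevel { Employee, Subsidiary, Overall }

    public static class ReportAggregation
    {
        // Rolls the granular per-employee rows up to the level the user requested.
        public static IEnumerable<ReportRow> Aggregate(IEnumerable<ReportRow> rows, AggregationLevel level)
        {
            switch (level)
            {
                case AggregationLevel.Employee:
                    return rows; // already the most granular level
                case AggregationLevel.Subsidiary:
                    return rows
                        .GroupBy(r => (r.Subsidiary, r.Year))
                        .Select(g => new ReportRow("(all)", g.Key.Subsidiary, g.Key.Year, g.Sum(r => r.ActualHours)));
                default: // Overall: one row per year across all subsidiaries
                    return rows
                        .GroupBy(r => r.Year)
                        .Select(g => new ReportRow("(all)", "(all)", g.Key, g.Sum(r => r.ActualHours)));
            }
        }
    }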

Azure Functions

To tackle the performance issues, we decided to pre-generate the data used by the SSRS report. The calculations are now done in Microsoft Azure using a so-called Azure Function. Basically, the Azure Function just triggers the time cockpit action that calculates the data for the report.

With Azure Functions, you can either script things (https://msdn.microsoft.com/en-us/magazine/mt614271.aspx) or call .NET class libraries. For ease of development, we implemented a small .NET class library that encapsulates the Web API calls to the time cockpit action that calculates the report data. The biggest benefit of Azure Functions is the variety of triggers provided; we use two of them (a sketch of the timer-triggered function follows this list):

  • HTTP trigger: Users at our customer can trigger a recalculation by simply opening a URL in the browser. That is, if the report data is not up to date, users can retrigger the calculation themselves.
  • Timer trigger: We scheduled the function to run once a night, when the most resources are available and nobody is affected if the function needs a few extra minutes to calculate the results.
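
To make this more tangible, here is a minimal, hedged sketch of the timer-triggered function as a C# script (run.csx) in the 2016-era Azure Functions model; the schedule lives in the accompanying function.json as a CRON expression (e.g. 0 0 2 * * * for 2 a.m. nightly). CalculationClient is a hypothetical name for the wrapper library sketched below:

    using System;

    // run.csx: invoked by the timer defined in function.json.
    public static void Run(TimerInfo myTimer, TraceWriter log)
    {
        log.Info($"Report calculation triggered at {DateTime.UtcNow:O}");

        // Delegate the actual work to the class library that wraps the
        // time cockpit Web API call (endpoint and token are placeholders).
        var client = new CalculationClient("https://<tenant-endpoint>/api/...", "<api-token>");
        client.TriggerReportCalculation();
    }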

The fact that the actual call to the time cockpit action is encapsulated in a class library has one great advantage: the library can be reused, for example, in a console app to test and debug the execution. It can likewise be used in an on-premises scheduled task or a classic Azure WebJob.
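
A hedged sketch of such a wrapper library and its reuse from a console app; the endpoint, authentication scheme, and payload are assumptions for illustration, not the actual time cockpit Web API contract:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;

    // Wraps the Web API call that triggers the time cockpit action.
    public class CalculationClient
    {
        private readonly HttpClient _http = new HttpClient();
        private readonly string _endpoint;

        public CalculationClient(string endpoint, string apiToken)
        {
            // Bearer authentication is an assumption for this sketch.
            _endpoint = endpoint;
            _http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", apiToken);
        }

        public void TriggerReportCalculation()
        {
            var payload = new StringContent("{}", Encoding.UTF8, "application/json");
            var response = _http.PostAsync(_endpoint, payload).Result;
            response.EnsureSuccessStatusCode(); // surface failures to the caller
        }
    }

    // Reusing the same library from a console app for local testing/debugging:
    public static class Program
    {
        public static void Main(string[] args)
        {
            var client = new CalculationClient(args[0], args[1]);
            client.TriggerReportCalculation();
            Console.WriteLine("Report data recalculated.");
        }
    }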

Conclusion

In this example, we demonstrated an alternative solution architecture for generating time cockpit SSRS reports. To recap, the main benefits are:

  • The time it takes to open the report in time cockpit remains largely constant, regardless of the aggregation level or the amount of data, because the data is pre-generated at night.
  • The architecture allows other systems to access time cockpit data and benefit from the domain logic implemented in time cockpit; the data becomes reusable for other software systems.
  • The architecture allows you to trigger the calculations in different ways (HTTP, timer, webhook, …).
  • Since the calculations are still done by a time cockpit action, the algorithms that perform them can be updated without affecting the Azure Function; that is, concerns are cleanly separated.

If you have similar challenges just let us know at support@timecockpit.com. We are happy to discuss the proposed solution with you in more detail.
