David Bitton: Great. It’s seven o’clock. Perfect. All right, so welcome everyone. We hear exciting stories from many of our customers once they start using Octopai’s automated BI intelligence platform, but FCSA’s story tops them all, and we’re very excited to have them here with us today to share it. What I’d like to do is just click on to the next slide, if you can, Andrew, and let’s introduce ourselves.
Andrew Stewardson: Well, good morning, good afternoon, good evening from wherever you’re at. My name is Andrew Stewardson. I am a data risk manager in our risk management group at Farm Credit Services of America.
Kelly Jenkins: I’m Kelly Jenkins. Good morning, afternoon again to all of us. I’m a lead financial systems analyst at Farm Credit Services of America.
David: Great. Thank you, Kelly. Thank you, Andrew, for helping us with this and myself as you can see here, David. I’ll be your host for today. At this point, I’m going to hand it over to Andrew to explain the amazing things that they’re doing with Octopai.
Andrew: Thank you, David. I really appreciate the opportunity to partner in this webinar, and thanks to everyone that joined today and to the Octopai team for being such a great partner in our data evolution. To get us started, I do want to share some context and some background of our organization. Farm Credit Services of America is a financial services cooperative that provides lending and insurance products to a wide range of ag producers.
One of the unique differentiators between our organization and system and other lending institutions is that we are 100% wholly owned by the customers that we serve, which means at the end of any given year, based on our market performance, we provide a patronage or really a dividend fund back to our borrowers. This is a benefit that we are certainly very proud of and have been proud of for many years. We look forward to providing that to our customers each year.
Our territory spans approximately four and a third states which include Nebraska, Iowa, Wyoming, South Dakota and parts of Kansas. We are part of a larger nationwide Farm Credit system which is overseen and governed by the Farm Credit Administration or commonly referred to as FCA. This visual here that I have on the screen, specifically that green oval, represents the oversight that’s provided by the United States Congress.
More importantly, however, this depicts the sequence of the funding cycle that occurs between our borrowers, the funding banks that support us, as well as all of our United States and global investors. You’ll notice some statistics there at the bottom. I won’t go through all of those. They really just help explain our footprint as a system.
The one that I do want to highlight there is that our system accounts for approximately 40% of all US farming business debt which is pretty significant from my perspective given that I don’t have a farming or agriculture background. I’m a data person, so to know that we are that expanded across the United States and Puerto Rico is pretty amazing.
Moving on and speaking more in summary about how we leverage data in our industry and organization. I’ve outlined some of our future and current use cases just for reference. I won’t go into tremendous detail on all of these, but as with most financial-centered organizations, we use high volumes of data to design risk models which allows us to create the appropriate level of automation and risk management. We provide in-depth market and business intelligence to our internal teammates. Because we are mandated by the United States Government, we have an obligation to provide scheduled reporting to our governing bodies and regulators.
Something more recent though, the organization has started to provide our customers with a benchmark snapshot. This really allows our customers to better understand the financial state and health of their operation. We are providing, basically, customers with prepackaged business intelligence so that they can make more well-informed, and of course, data-based decisions. As we look into the future, we will continue to build out a more consistent Customer 360 to understand how to best serve their financial needs and provide them with more dynamic financial and economic insights.
We’ll continue to launch new digital products, expanding our digital presence using data, of course, to create new channels for those interactions. Naturally, in the era that we’re all in right now with data privacy, being able to support the protection of their sensitive information is of the utmost importance to us. Something that is a passion of mine, personally, is business intelligence.
We will aim to reduce our internal market and business intelligence cycle times from weeks and days to, hopefully, hours and minutes, giving us the ability to create much more data speed and flexibility and then overall accessibility to our data assets. To speak specifically about how we’re using Octopai, and before I turn this over to Kelly to share her experience, I would like to mention how we’re driving business value from a data-lineage solution.
Now, this list is not by any means exhaustive, but it does provide some perspective on how we create the right approaches for change management and value creation. It should be noted that it is extremely important to us to be able to expedite and automate our data lineage. The items listed here are just a handful of the places where we see immediate, returnable, and measurable value.
First there, EDW research. Our BI team is asked quite frequently how fields in the EDW are calculated or where they are sourced from, and typically this takes a developer’s time and effort to open up configuration files or source code and manually scan through blocks of that code to identify source mappings. As you can probably appreciate, or perhaps empathize with, this is fairly time-intensive. Data quality research is another area of value for us, and these data quality issues are typically time-sensitive and many times critical.
I’d ask you to think about the experiences you’ve had where you have to chase different people down to help research what is causing a data issue, where it is happening, where else it may be occurring, and so on. We use data lineage to concisely locate and target that data issue for those remediation efforts. Gone are the days for us of asking multiple questions of multiple people, fixing it in one place, and then having to replicate that fix elsewhere.
We try to target the root of that and fix it at the most upstream source so that it flows naturally through the systems and we get the benefit of proliferated data quality. This is tremendously beneficial in maintaining those tight service level agreements that we aim to have with our users and our different applications.
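The idea of fixing an issue at the most upstream source can be pictured as a walk over a lineage graph. Here is a minimal sketch, assuming lineage is available as a simple mapping; every table and report name below is hypothetical, not one of FCSA’s actual systems:

```python
# Minimal sketch: trace a problem asset back to its most upstream source
# in a lineage graph. All asset names are made up for illustration.

# Each key is a data asset; the value lists the assets it is loaded from.
upstream = {
    "report.loan_summary": ["dw.fact_loans"],
    "dw.fact_loans": ["stg.loans"],
    "stg.loans": ["src.core_banking.loans"],
    "src.core_banking.loans": [],  # original system of record
}

def root_sources(asset):
    """Return the most-upstream assets feeding `asset`."""
    parents = upstream.get(asset, [])
    if not parents:
        return [asset]
    roots = []
    for parent in parents:
        roots.extend(root_sources(parent))
    return roots

print(root_sources("report.loan_summary"))  # ['src.core_banking.loans']
```

Fixing the issue at the root returned here, rather than patching the report, is what lets the correction flow naturally back down through staging, the warehouse, and reporting.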
As our organization modernizes our data ecosystem and we go through some of these enterprise platform conversions or migrations, whatever you want to call them, it becomes increasingly motivating for us to identify where data is used and how it impacts other systems that we integrate with. Having a where-used analysis, or the ability to accelerate our system impact studies, helps developers, business analysts, and other data consumers visually understand where else data may exist.
By complementing Octopai as a data lineage solution with a comprehensive data catalog, we get that enriched perspective of what data means, how good it is, and, of course, where it’s sourced from. Really, for us, the theme of data lineage and Octopai is research, and being able to process that research quite quickly is where we are anticipating more and more value across the organization.
You’ll hear this in Kelly’s experience with Octopai and how she supplemented her project efforts while also being a newer teammate to the organization. With that, I want to pass it over to Kelly to really share our case study, our story of how we’ve benefitted from Octopai. Kelly.
Kelly: Sorry, I was on mute. Thanks, Andrew. As Andrew mentioned, I still consider myself new to the organization. I started with Farm Credit in April of 2019, and I went through my normal assimilation for the first couple of months, and then in the summer of last year, I was placed on a project that had been an ongoing effort. It started with a different team, different individuals on the team, and then it moved over to me. There was an initial phase that happened, and then I was brought on board and given the opportunity to use the Octopai tool.
I just wanted to give a little background on my experience with it and my experience with the project. Andrew, if you want to move to the next slide, please. [silence]. Are you frozen?
Andrew: I hope not. Can we see it okay, Kelly?
Kelly: I’m still seeing the old slide. There we go. The project that we worked on, the challenge that we were presented with, is that we have an outside partner vendor. They supply us with nightly data feeds of our customer data for a certain product that a customer segment leverages and uses. They were doing an application upgrade, so they were making the decision to move to a modern software application.
This was going to drive a large-scale conversion project between us and our partner vendor. Our legacy database had a long history of ownership; it had changed hands many times over the years, and there were usage gaps. We had an aged load process. We were getting nightly data feeds from our partner: multiple text files, thousands of data elements, and one-to-many relationships between the source files and their database endpoints.
We really were presented with the challenge of how to convert, and also the opportunity of how to approach it: do we work to try to remodel the existing database, or, given what we perceived to be large-scale impacts, do we streamline the process and reinvent ourselves? Some initial impact analysis that we did with our partner revealed that we would have approximately 500 fields deprecated from our nightly load, as well as additional field impacts.
They were forecasting that fields would be changing, potentially in the length of the fields and the type of the fields that were going to be delivered to us nightly, and the format of our files was going to change as well: we were moving from a text approach to an XML approach. We really looked at this as more of an opportunity than a challenge, to be able to create a new database, have a clean load process, and really figure out what, in terms of ownership and usage, was valuable and needed to be reinvented in the new database.
Obviously, in all of this, everything is very data-driven, so data mapping was a huge first step especially to wrap our arms around the amount of changes that we were predicting to occur. Can we move to the next slide, please? [silence]. The initial approach, we’re calling that how we looked at the change before automation or phase one approach. We knew in partnering with our vendor that there was going to be a massive amount of data element changes.
This involved a lot of manual effort in terms of nightmares of spreadsheets at a large scale to figure out what was actually coming in, where it was landing, and potentially how that was being used across the organization. This took multiple systems analysts, a great deal of manual research to figure out how those fields were being mapped within the database and how they were being leveraged out.
It also consisted of quite a bit of broad-scale communication out to whoever we thought might be a user of that data, so they could try to understand how the impact of a deprecation or an altered field might change their world. What we were finding, due to the legacy database having that history of changing ownership and kind of an uncertainty, is that we were still getting responses of, “Oh, we’re not sure if we use that. We’re not sure how that might impact us.” Then that kind of drove more research. There was more legwork to find who the rightful owner of this data was.
We conducted a lot of that manual research within our internal database. Looking in SQL Server, figuring out where table references were, using tools like Agent Ransack, evaluating SSRS reports to determine where that usage might be, where that ownership is. It was a great deal of effort. It was very manual and it took approximately four months to try to figure out what those deprecation or those alteration impacts might be. At the end of that period, there were still some unanswered questions of, “Do we really use this? Do we really need it?”
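As a rough illustration of what that manual research looks like in code, assuming the stored procedure and report definitions are available as text, a script might grep for a field name the way a tool like Agent Ransack does. The file names, table names, and field names here are entirely made up:

```python
import re

# Sketch of the manual research described above: scanning blocks of SQL
# source for whole-word references to a field. All names are illustrative.

sql_sources = {
    "load_customers.sql": "INSERT INTO dbo.Customer (AcctNum, LegacyCode) SELECT ...",
    "rpt_balances.sql":   "SELECT AcctNum, Balance FROM dbo.Customer",
    "load_products.sql":  "INSERT INTO dbo.Product (ProdId) SELECT ...",
}

def find_field_usage(field, sources):
    """Return the files whose SQL mentions `field` as a whole word."""
    pattern = re.compile(rf"\b{re.escape(field)}\b", re.IGNORECASE)
    return sorted(name for name, text in sources.items() if pattern.search(text))

print(find_field_usage("AcctNum", sql_sources))
# ['load_customers.sql', 'rpt_balances.sql']
```

Multiply this kind of search across stored procedures, SSIS packages, and SSRS report definitions for hundreds of fields, and the four months of manual effort described above becomes easy to understand.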
In some ways, we just painted in broad strokes in that agreement with our partners: we needed to keep this just in case. There was still some gray area in terms of the research that we were doing during that first phase. If you can move to the next slide, please. That was the point when I was brought on board, after that first phase had been completed, and from there we looked at how we could make that additional evaluation for the deprecation. Being new to the organization, I felt like I didn’t really know who to go to.
In the initial phase when we had that manual review, those that were working on the project had the advantage of being in the organization prior and having those connections to know who to reach out to. I was in a different boat. I didn’t have exposure to the organization. I didn’t have exposure to the industry. I came from a completely different industry, so I was like, “Oh, what do I do?” My boss had actually said, “Hey, this is a new tool that the organization is looking at, might be helpful to you and your research that you’re doing.”
I actually just did some self-serving on the internet. I signed up for an account. I looked at Octopai’s website. I found some conference presentations that they had done at different conferences on YouTube, and I was off and running. Essentially, what I did at that point was evaluated the next phase of changes that were coming from our vendor. I was able to really self-serve, use the Octopai tool from a discovery and lineage perspective to actually do the advanced work pretty quickly in an automated fashion to figure out if those fields were in use, who might be the owner of them, and what the impact was.
I was able to do it in a one-stop-shop, so instead of looking through all these different tools, like SQL Server, looking through store procedures and old project archives and things like that, I was actually able to drill into the discovery and into the lineage to see all that, to do some fact-finding for myself, and then I was able to take that information and actually target who I determined the owners were.
That research that I was able to do took me about a day, and then what that enabled me to do was really to change the dialogue with my business partners. Instead of me saying, “Hey, we have these fields that they’re going to go away. I think you might be using them. How does that impact you?” It shifted the conversation. I was actually able to say, “Hey, these fields are being deprecated or they’re being altered. I see they’re being used here. Let’s discuss what the impact is to you,” and kind of flipping the conversation.
Instead of just presenting the business owners with something they would have to research themselves, I’d already done the research for them, and we were then able to really partner and work together to make decisions on how we were going to proceed. The discovery tools and the lineage tools in Octopai really empowered me, as, one, a new employee, and two, someone new to the project, to do a lot of that independently and to have those educated conversations with our business owners.
Most importantly, it actually reduced the research time. Previously, from a resource allocation perspective, there are multiple analysts that were doing all that manual legwork, doing those universal broadcasts to our teammates. I was able to do a lot of that on my own to really target and start those conversations. I was very thankful for the tool because it just helped me as a new employee, as a member of the project to be able to self-serve on my end and then to have those informed conversations and then– Sorry, I lost my place. I think that we’re actually onto the next slide, Andrew. Sorry.
Andrew: There you go. Okay.
Kelly: Then from there, I had teamed up with Andrew on other projects, and he roped me into doing this. He said, “Hey, let’s talk to everybody about your experience.” Here we are today, right? From a benefits perspective, obviously, the time savings. With this project, in particular, there was months-long of multiple resources trying to figure this information out. There were still question marks after that phase, and then here with the Octopai tool, we were able to really condense that research time, and again, have those constructive dialogues.
It’s a user-friendly tool. It was very self-serve. I did not have to have any training, and I’m not a data person at all, so I had never used a lineage tool previously. I had done some data mapping, but a formalized tool was something that was brand new to me, and I was able to pick it up right away. I don’t know if you want to talk about the BI and analytics benefits, Andrew, or any other benefits from your vantage point?
Andrew: Yes, certainly. First, thank you, Kelly, for sharing that story with everyone. I think that’s a really important narrative that we are continuing to create as part of our overall data strategy to get users adopting these self-service tools. This was a very significant turning point for us, and this is part of the reason we chose to implement a data lineage solution as one of the first of many tools that we are implementing across the organization.
Additionally, to what Kelly had mentioned, there’s minimizing the people dependency. Everything that we are trying to do in regards to the capabilities we are bringing online is to have this added dimension of self-service. We are far past the point of being dependent on a core data management team to do all the training, to do all the work, to provide all the capabilities, and then to actually carry out that effort. We need people to be able to access these tools and use them in a very intuitive way.
Of course, Octopai, not to take too much away from what David is going to share, but the features are really useful in that way, and they’re very friendly. To create that visual representation of how data is sourced, where it lands and targets, and then how it’s used across many different systems is really powerful.
To speak a little bit more to the BI team, as I mentioned earlier, we have a lot of EDW research that goes on. Being able to enable the BI and analytics teams to get the whole story is very critical. I believe, and I’m quoting a little bit arbitrarily based on what I know, it takes hours, sometimes days, for them to open that code and research it. Now they can actually use discovery: type in a string to search for, and they can find multiple different locations of it through the entire data pipeline. Being able to respond much more quickly is really important for us.
Then, of course, no implementation costs, as Kelly mentioned, in terms of training. We do have some training that is available through our learning management system, but because it is so intuitive if you can use search features and you’re familiar with many different types of web browser applications, it fits really well into that same type of experience that you would have.
With that, I don’t want to talk too much about the product because that’s really where Dave is partnering with us on this webinar. Once again, everybody, thank you for joining and listening to our story. I hope you have some questions for us. With that, I will turn it over to Dave to go into more of a product demo.
David: Sure, thank you very much, Andrew and Kelly. We really thank you for sharing your best practices. I think I can speak for the sister companies that are on the call and other colleagues in the BI industry in saying they’re appreciative of you sharing that best practice with them. Bear with me. I’m going to share my slides, or my screen.
You should be able to see my screen. I promise I won’t take too much time, but I’m going to jump in and share a few slides which will give you a little bit of history about who we are, what we do, and how we do it. Then I’ll dive straight into a demo and show you a little bit of what both Kelly and Andrew were speaking about.
Basically, today, the data supply chain, the entire BI and analytics supply chain from the moment the data is collected, processed, stored, touched, and changed, all the way until it’s consumed in the form of reports, is broken, because without proper visibility and control, there’s no way you can actually ensure otherwise. Octopai enables full visibility into the entire data supply chain, empowering BI and analytics teams to easily find and understand their data so that they can enable the business to make better decisions based on solid, qualified data.
How do we do it? With Octopai, all of that metadata that is so crucial to understand and so difficult to collect is actually collected by us and placed into a cross-platform SaaS solution automatically. I’m going to say that again: we discover that metadata automatically, because it’s an important point to note, and that means there are no manual processes. There’s no documentation, no prep work, no customizations, or even professional services needed.
Once the metadata has been collected by us, centralized, analyzed, modeled, and parsed, it goes through a few more processes, such as being indexed, and is then ready for discovery, so that you can easily find metadata literally in seconds by clicking the mouse. Octopai reduces the time it would take to do that from weeks to literally seconds, providing you the best, most accurate picture of that metadata.
Not only is Octopai essential in the initial setup, collection, cataloging, and analysis of that metadata, it’s also essential moving forward, so that whenever you need to look for metadata next week, next month, or next year, you’ll always be looking at the most current picture at that given point in time, not some spreadsheet that, with all good intentions, was created a couple of years ago and never updated. It will always be the freshest picture of your metadata at that given point in time.
This slide here will give you a further understanding of the way we work. I’m sure it’s nothing new to all those here on the call, but I will quickly go through. This is a typical example of a BI infrastructure, very common amongst our customers, very common amongst those many of you that are on the call today. What we see here on the left-hand side is a stack of different business applications that are being used by the organization. Now, the different users are going to be entering data in large quantities, business users, HR, finance, and so on.
That data is also going to be required or consumed by them in return. However, they don’t have direct access to it. It’s usually the BI team or the relevant team– It could be called something else in many different organizations, but let’s say it’s the BI team that’s responsible for making that data available to those business users. That’s why then, at any given point in time, they need to know where the data is and then understand its moving process through the various systems that we see here on the screen.
This may be due to reporting errors, impact analysis issues, and various other use cases. Now, because that metadata is actually scattered throughout the landscape, it’s actually very challenging to get a handle on. What our customers are telling us is actually that their teams are spending more than 50% of their time just trying to discover and understand where that metadata is, to understand then its relationships, connections, and data lineage.
Now, in order to overcome these challenges, what we’ve done is we’ve actually leveraged technology using some very powerful algorithms in learning the vast amount of processing power that’s available on the cloud to create a solution that actually extracts that metadata, centralizes it, analyzes it for you, and it makes it available automatically from your various systems. We’re able to do it very simply.
We extract that metadata from the different tools. This is then uploaded to our cloud for analysis, and then within 24 to 48 hours, you get an in-depth picture of the entire landscape. It’s that simple. No major projects or timelines or resources are required to get up and running with Octopai.
All right, so when coming to Octopai, most of our customers are usually facing, or mention facing, some or all of these similar challenges. The common denominator here is metadata. It’s required everywhere. Our customers are faced with challenges whenever they build a new process, or when they make a change or edit an existing process, an ETL, for example. There may be M&A that comes into play in some organizations. Impact analysis, certainly, whether in reports or in any ETLs, for example.
When you make a change, whether to a field, an ETL, a table, or a report, how do you know what’s going to be impacted down the line before you go into production? Then there’s implementation and maintenance of data glossaries, data dictionaries, data governance platforms, and so on. Then, of course, repairing reporting errors, which is going to be a daily occurrence, very common and very challenging amongst most of our customers. That’s usually why they’re coming to us: because today they’re usually addressing these, as Kelly mentioned, manually, which is taking way too much time.
What I’d like to do now is jump into the demo and show you how we can address these challenges for you. I’d like to demonstrate the power of Octopai with reference to two specific use cases, which will give you the broadest understanding of Octopai’s capabilities and how they could be applied to the specific use cases within your organization. The first one I’d like to show you, and the reason I want to show it first is because it’s the most common one, is regarding errors in reports.
What I’d like to do is describe how that would be handled today in most organizations, and then show you how it would be handled with Octopai. Imagine there’s a support ticket issued by a business user who’s asking the BI team to look into a problem they have with a report. They just received the report. It’s the end of the quarter, and they have to release quarterly earnings. Guess what? The report’s empty or blank, or one of the columns is missing, or one of the data elements is not correct. There could be many, many different scenarios there.
Of course, everybody’s going to get into a little bit of a panic to try to figure it out. Thankfully, we’ll have a way to show you how we can do that with Octopai. Traditionally, though, in most organizations that would be addressing that challenge would involve a lot of manual work. Our customers are telling us that’s just too time-consuming and inefficient to be working this way, certainly, today with all the amount of data that’s coming down the pipe more and more all the time.
Of course, that wouldn’t be the case with Octopai. Now let’s go ahead and see how Octopai would address those challenges for you. What we see here is the dashboard, the Octopai dashboard, and what we can see is that Octopai has actually gone ahead and extracted that metadata from the different systems here. We can see it represented in the dashboard as I mentioned. On the left-hand side, we can see here the ETLs. Yes, I know that SQL Server is not an ETL. However, we do tag the stored procedures as ETLs because they oftentimes load data.
In the middle, we see the database objects from the multiple databases. Then to the right of that, we see an example of some reports from multiple reporting systems. A typical non-homogeneous environment that we see in most of our customers’ environments. In order to investigate the scenario we just talked about, an error in a report, in most organizations you’d need to go through a process similar to this.
Now, the BI team will need to investigate the structure of the reporting system, then everything will need to be mapped, and they will probably need to contact a DBA to ask questions like, for example, which exact tables and views may have been related to the creation of that report? Then they will also need to look into the fields, whether they were given the same names and, if not, which glossary was used, if one was used at all.
Now, after investigating all of this, which, of course, is going to take some time, our DBA may actually point out that there’s nothing wrong at this level and everything is perfect, and most likely the error crept in earlier on at the ETL level. Our team is now going to need to investigate at the ETL level, which will be a similar scenario and, of course, take more time. Now most of our [unintelligible 00:29:59] or most of our customers are telling us that on the very simplest level, it may take an hour or two to address that challenge.
If it’s a little bit more complicated, it might take a day or two, and if it’s really complicated, even a week or two. Now, that’s what most of our customers are telling us, and that’s, I would say, a fair synopsis of the way it’s handled in most organizations. What I’d like to do now is show you that exact same scenario being handled within Octopai, literally in seconds and automatically.
The problem we’re having is with a report called Customer Products. I’m going to type that in, and you’ll notice that Octopai will filter through all of that metadata and show me the report we’re having trouble with. In order to get to the lineage of that report, as we mentioned, it’s not going to take hours, days, or weeks. Literally, a click of the mouse and a second later, we now see at a high level exactly how the data landed on that report.
Let’s quickly go through what we see here on the screen. To the right here is the report we’re having trouble with. If we double-click on any item on the screen, we can actually get to more information, and as we move to the left, we can see that there’s a view here. As I continue to move to the left, we see that there are an additional two or three tables. Now, continuing on, as our DBA told us, we need to take a look at this ETL. We see here that there’s not one ETL but actually three different ETLs, and they’re all from different systems.
Just give me a second. I will need to refresh my screen. What I’m trying to point out is that the fact that you may be using a multitude of different systems to manage or move your data is not a challenge for Octopai. What you can see here on the screen is that the data went through many different systems in order to get to this report, including different ETLs, a different data warehouse, and then reporting; roughly five different systems were involved.
Continuing on with our scenario, let’s say we asked our customer what happened with this report, and they mention that they actually made changes to this one ETL a couple of weeks earlier; most likely that’s why they’re facing production issues today. We asked them, I guess naively: if you were making changes to that ETL, you certainly knew there would be impacts down the line. Why not investigate it? Why not be proactive? Why not ensure data quality, no production issues, and confidence in the data by actually fixing the problem in advance?
Now, of course, as we all know, that’s easier said than done. There could simply be too much to look into: hundreds, thousands, or even hundreds of thousands of different ETLs, tables, views, fields, and reports could be affected by any one change. Because it’s nearly impossible to be proactive, most organizations are forced to work reactively, meaning that, of course, they make these changes and simply try to avoid production issues.
They’ll use the capabilities at hand, such as the experience of the people on the team who built this, maybe some spreadsheets that are hopefully up to date, maybe a little prayer and a finger to the wind, and all of that will most likely work 8 or 9 times out of 10. Then they’ll address the 1 or 2 times out of 10 where there are production issues.
Now, the problem with that is that since you’re working reactively, you’re only reacting to what you’re aware of, and therein lie the issues: data quality problems, loss of confidence in the data, and so on. With Octopai, of course, we enable you to be proactive and more efficient, to understand exactly what will be affected in advance, literally at the click of a mouse. Let’s say we need to make changes to this ETL, as our customer in this scenario did.
In order to understand what would be impacted, as you saw, I double-clicked on that ETL and then clicked on the lineage of that ETL, and literally within a second I have an exact understanding of what would, or could, be affected if I make changes to that one ETL. What we see here is something quite interesting. The only reason we started this entire scenario is that we were made aware of one issue, with one report, by one business user.
Now that we see the lineage of this ETL, we can see that this was most likely not the only thing affected. Most likely some, if not all, of these objects on the screen (the different tables, views, dimensions, stored procedures, and certainly these reports) could have or would have been affected by the one change to that one ETL.
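To make the idea concrete, impact analysis over a lineage graph is essentially a downstream graph walk: start from the changed object and collect everything that consumes its output, directly or indirectly. A minimal sketch in Python, with a hypothetical dependency graph (all object names invented for illustration):

```python
from collections import deque

# Hypothetical downstream-dependency graph: each key maps an object to the
# objects that consume its output (ETL -> tables -> views -> reports).
DEPENDENCIES = {
    "etl_load_customers": ["tbl_customers", "tbl_addresses"],
    "tbl_customers": ["vw_customer_products"],
    "tbl_addresses": ["vw_customer_products"],
    "vw_customer_products": ["rpt_customer_products", "rpt_quarterly_sales"],
}

def impacted_objects(changed_object):
    """Breadth-first walk of everything downstream of a changed object."""
    impacted, queue = set(), deque([changed_object])
    while queue:
        current = queue.popleft()
        for child in DEPENDENCIES.get(current, []):
            if child not in impacted:
                impacted.add(child)
                queue.append(child)
    return sorted(impacted)
```

Running `impacted_objects("etl_load_customers")` surfaces both reports even though the business user only complained about one of them, which is the point of the scenario.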
Most likely, as time progresses, these reports will not all get opened at the same time by one person. They’ll be opened by different people throughout the year: daily, weekly, monthly, semi-annually, annually, and so on. As these reports get used, they’ll be opened, and hopefully those users will actually notice the errors. When they do notice those errors, support tickets will be issued, and the appropriate team, most likely the BI team, will be tasked with trying to figure out what went wrong with those reports.
Since the tickets came in throughout the year from different people, there’s no real way the team could know from the get-go that the root cause is this ETL. They will need to reverse-engineer all of these reports. As we said earlier, that could take hours, days, or weeks; hopefully only hours, but you can imagine how many report issues are reported to the BI team over a year, and how many reports they would need to reverse-engineer.
All of that time, of course, is wasted, because if they had known from the get-go that this ETL was the root cause of all of those errors, they could have saved that time and put it to more productive use. Now, I left these here on the right-hand side for a specific reason: if you’re working reactively, most likely some of the errors and some of the reports will fall through the cracks.
What will happen is the organization will continue to use those reports and base business decisions on them, not knowing that those reports contain erroneous data. Of course, we can all imagine how impactful that might be, depending on how important the report is. Of the two scenarios, that’s going to be the most impactful to the organization.
To continue on and delve deeper, we’ll go all the way down to the column level. I can either peel apart this ETL by looking at the packages and containers, or I can go specifically into the different flows, but I’ll show you this way. Double-clicking takes us into the package view. However, this is a demo environment, so I apologize, there’s only one package. I’m sure in your real environment you may have a multitude of them, and you’ll see them all here.
Delving deeper still takes us to the container view, where we can now see the different flow tasks. From here, I can click on any one of these and see, and I believe this is the one I wanted to show, the actual column-to-column association. Yes, that’s exactly what I wanted to do. Now, if I click on any field, I can actually see its entire journey from its source all the way to its target.
Now, this source may very well be a target of a previous process. Double-clicking on it, and we can see that it is, enables us to go backwards all the way to the source application, and vice versa: this target may very well be a source to a following process, and we can see that by going forward. What we see is the entire journey a field has taken from the source application all the way to the final target report, including any type of transformation.
There may be a name change, a language change, a dash, a space; Octopai will still be able to provide you with the lineage regardless, because we’re actually looking at three different layers, the physical, semantic, and presentation layers. That, plus our algorithms and machine learning, lets us present the entire lineage of that field from source all the way to target.
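Tracing a single column back to its source, renames and all, amounts to following recorded lineage hops backwards until no earlier hop exists. A minimal sketch, with hypothetical tables, column names, and transformations (not Octopai’s internal representation):

```python
# Hypothetical column-level lineage: each target column records the source
# column it was derived from and the transformation applied along the way.
COLUMN_LINEAGE = {
    ("dwh.dim_customer", "cust_name"): {
        "source": ("staging.customers", "CUSTOMER_NAME"),
        "transform": "rename",
    },
    ("staging.customers", "CUSTOMER_NAME"): {
        "source": ("sap.kna1", "NAME1"),
        "transform": "trim + upper-case",
    },
}

def trace_to_source(table, column):
    """Follow a column backwards hop by hop until the source application."""
    path = [(table, column, None)]
    while (table, column) in COLUMN_LINEAGE:
        hop = COLUMN_LINEAGE[(table, column)]
        table, column = hop["source"]
        path.append((table, column, hop["transform"]))
    return path
```

Because each hop carries both the name mapping and the transformation, a rename or a formatting change along the way does not break the chain.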
All right, so that was scenario number one. I promised to show you two, and if time permits, I will show you three. The next scenario is where there is a need to look for a specific item within your environment. I think Andrew spoke about looking for a script, for example, or a line of code. There could be many different things you might be looking for: PII-sensitive fields, a formula you need to change, the length of a field you need to adjust, and so on.
There could be many reasons; I’m sure you could all come up with scenarios better than mine. Now, in most organizations, the way that’s handled today is, again, manual: a bunch of people in a room, like I think Kelly mentioned they did before they were using Octopai, trying to map everything out and find where it is. Maybe they’re using some spreadsheets; hopefully they’re up to date, possibly they’re not. That effort is, on its own, a small project. It could take a couple of hours or a couple of days.
Now, of course, once again, with Octopai we automate that entire process. If you need to look for something specific, you come here to the Discovery tab. For the sake of the demonstration, I’m just going to use a simple word, “customer,” and search the entire environment, literally in a second. I click the Enter button, and Octopai comes back and shows me all of the different systems connected to Octopai, such as the different ETLs, databases, data warehouses, analysis tools, dictionaries, and reporting tools.
What we see here in the middle of the screen, in green, is where Octopai actually found the term and how many times. In gray, we see where Octopai searched and found nothing, which might be even more important, because it saves you the time and effort of digging into those systems. All of the green objects on the screen, you can actually delve deeper into.
Let’s say we actually wanted to look at this SQL script. Clicking on it gives us more information, such as the package path or the connection string, and also lets us jump into the script itself. In this case, we were looking for the word “customer,” but again, it could be anything: a field, a formula, a line of code. Anything you might be looking for, we can find here.
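The discovery view described above, green systems with match counts and gray systems with none, can be pictured as a keyword count over a per-system metadata inventory. A minimal sketch with invented system and object names:

```python
# Hypothetical metadata inventory keyed by system, mimicking the Discovery
# view: systems with matches would render green, zero-match systems gray.
METADATA = {
    "SSIS":   ["pkg_load_customer", "pkg_load_orders"],
    "Oracle": ["tbl_customer", "vw_customer_products", "tbl_invoices"],
    "SSRS":   ["rpt_customer_products"],
    "Qlik":   ["sales_dashboard"],
}

def discover(keyword):
    """Count keyword matches per system; zero counts are still reported."""
    keyword = keyword.lower()
    return {
        system: sum(keyword in name.lower() for name in objects)
        for system, objects in METADATA.items()
    }
```

Reporting the zero counts explicitly mirrors the point made above: knowing where the term does not appear is itself useful, because it spares you from searching those systems by hand.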
Everything within Octopai is exportable to Excel, so if you need to collaborate with your team members, you can certainly do that. We also provide an API, so you can export everything within Octopai into different applications. We also have direct integration with some of the leading data governance platforms out there, such as Collibra.
What that means is that all of this metadata, and not just the metadata but everything we do to it, the massaging, the analysis, the modeling, can be injected into those applications, saving you months and months of otherwise manual work.
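The kind of payload such an export might carry can be sketched as plain JSON. Note that the field names below are illustrative assumptions, not Octopai’s actual API contract or Collibra’s import format:

```python
import json

# Hypothetical shape of a lineage-metadata export destined for a data
# governance platform; every key name here is an invented example.
def build_export(objects):
    """Serialize a list of metadata assets into a JSON export payload."""
    return json.dumps(
        {
            "format_version": 1,
            "assets": [
                {"name": o["name"], "type": o["type"], "system": o["system"]}
                for o in objects
            ],
        },
        indent=2,
    )
```

The same structure could just as easily be written to a spreadsheet for the Excel-based collaboration mentioned above; the value is that the payload is machine-generated rather than hand-maintained.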
All right, that was scenario number two. I think we have a little bit of time for me to jump into the third one, which touches on the business glossary. Give me one second; let me see if I have a slide on the business glossary in my presentation. That’s fine, I don’t, so I’ll jump straight into the demo.
All right. Octopai launched its business glossary fairly recently; actually, it’s been almost a year now. We didn’t develop the business glossary in a vacuum. We went to the business, to some of our leading customers, and to some of the leading experts, very well-respected people within the business intelligence industry. We asked them: if you were going to build your business glossary from the ground up and could include everything you could imagine, a basic dream list, what would it be?
They came back with quite a few demands, as you can imagine, a whole laundry list of items, and I’m happy to say we were able to develop most if not all of them. There were five key features and benefits they wanted included in the business glossary, and I’ll quickly go through them off the top of my head. Number one, they wanted all of the different metadata from the reporting system in one business glossary, across the semantic, presentation, and physical layers.
They also wanted one business glossary for all of the different reporting tools they might be using, such as Tableau, SSRS, and Qlik; one business glossary to address them all. They were adamant that they also needed the calculated items from the reporting system, because entering and maintaining those on your own is very time-consuming, which leads us to the next point.
They didn’t want to be buying themselves a project. If you can imagine implementing a business glossary with basically anyone other than Octopai, it’s going to be very manually intensive. It will most likely take a team of individuals months on end of manual work to enter all the information: descriptions, tags, the calculated items, the source application, and so on. It will take at least five minutes or so, if not longer, per field.
Most organizations are managing hundreds, thousands, or even tens of thousands of fields, if not more, so you can quickly do the calculation and imagine how much effort would be involved in implementing a business glossary. What I’d like to do now is explain how Octopai has automated that entire process, which was the final item on our customers’ wish list. I’m going to dive into the business glossary for you now.
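As a quick worked version of that calculation, assuming the five minutes per field quoted above and a hypothetical 10,000-field estate:

```python
# Back-of-envelope effort estimate for a fully manual business glossary:
# five minutes per field across a hypothetical 10,000-field estate.
minutes_per_field = 5
fields = 10_000

total_hours = minutes_per_field * fields / 60   # raw data-entry time in hours
workdays = total_hours / 8                      # in eight-hour working days
```

That comes to roughly 833 hours, or about 104 working days of pure data entry, before accounting for review or ongoing maintenance.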
All right. The best way to explain anything is through a use case, so that’s what I’ll do. You have a business user who has just received a report they asked you to create: a simple report with three items, first name, last name, and tax amount, coming from the SAP ERP system. Lo and behold, when they received that report, it had only two fields: full name and tax amount. They’ve opened a support ticket and asked you to look into it.
You open Octopai and look up the lineage of that report. I know that report; it’s called Sales Report. As soon as I type it in, I can see the report the business user is complaining about. To take a high-level look at it, I just click on the map, and I can see that the business user didn’t have too much wine at lunch; they’re right. It did come in with two fields instead of three: full name and tax amount.
What I need to do is simply click on full name, and I can trace back the lineage of that field. I can see it’s a concatenation, a merger, of first and last name. At this point, I’m starting to feel a little better that they got what they wanted, but I’m still not 100% sure. To answer them confidently, I need the full description of that field, so I’m going to jump into the business glossary and look for it.
I’m going to type in “full.” We know it’s a Business Objects report in the presentation layer. As soon as I type in “full,” Octopai comes back and, in this demo environment, shows me all of the fields that contain it. We have some here in SSRS, which is a report; we also have the physical layer, the presentation layer, the semantic layer, and then the actual field we’re looking for. If I simply click on it, I’ll get my answer very quickly. But before I jump in, I want to give you an example of what it would look like if you tried to build your own business glossary.
You would most likely be faced with a login screen like this the first time you log in; of course, it would differ a little depending on what you’re using. Then you’d have to enter all of that information field by field, one by one, and it would need to be maintained once created, and so on. We’ve automated that entire process based on our expertise and our capabilities in automatically collecting all of that metadata.
Depending on whether you have your descriptions somewhere in a spreadsheet or already entered into a reporting system, there can be either little or no effort involved in creating this business glossary; I’ll explain that in a moment. Let’s quickly see the answer to our business user’s scenario: we can now confidently say that the full name field contains what they asked for, because we can see it as a CONCAT calculation of first, middle, and last name, and it’s coming from the SAP ERP system.
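The glossary answer here, a calculated field whose lineage resolves to a CONCAT of source fields, can be sketched as a simple lookup. The entry contents below are invented for illustration, not the actual glossary record:

```python
# Hypothetical business-glossary entry for the calculated "Full Name" field.
GLOSSARY = {
    "Full Name": {
        "calculation": "CONCAT(first_name, ' ', middle_name, ' ', last_name)",
        "sources": ["first_name", "middle_name", "last_name"],
        "source_application": "SAP ERP",
    }
}

def covers_request(field, requested):
    """Check that one glossary field covers every source field requested."""
    entry = GLOSSARY[field]
    return set(requested) <= set(entry["sources"])
```

With the calculation and source list stored alongside the field, the check that "Full Name" satisfies a request for first name and last name becomes a one-line subset test instead of a support-ticket investigation.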
We can now go back and confidently tell the business user that although it’s one field, it actually contains what they’re looking for. Let’s quickly go through what I was mentioning: the description we see here on the screen needs, of course, to exist somewhere, either in a spreadsheet or in a reporting system, for us to be able to enter it here for you automatically. If you have that, there’s really not much else that needs to be done for you to have a fully functioning business glossary.
The rest of the items, I’ll go through quickly; these are pulled automatically from the metadata of the different systems, including, as you can see here, the description, the type of item, and the path. In this demo environment it shows a sample path; in production we have the path to all of the reports that contain that field, which is very helpful, plus the source application, the owner, and so on. Also very important is the capability of tagging the different fields.
If you have certain fields that are PII and you need to tag them for GDPR, CCPA, or any of the multitude of state or worldwide privacy regulations coming into place, you’ll be able to do that. Another example: you’ve been tasked with making changes to a bunch of fields and want to tag them for a specific project. Tagging gives you the capability of gathering them all together in one place, so you can search for, say, all GDPR fields by clicking on the GDPR tag.
Oh, just a second. No demo is ever fully finished until we have a technical issue. Not a major one; let’s just quickly go back through that.
All right, there we go. As I was mentioning, you can click on a tag and search for what you’re looking for, or you can search for it in the filter.
Additionally, and this is very important if you want your business users to have full access and visibility into the different fields whenever they want to build a report: they’ll have all of the different fields from the different systems and source applications in one place, even though in most organizations those fields will be named slightly or completely differently.
You may have hundreds of different fields with the same data in them but named differently, as you can see here with full name. In this example, it’s “full name” with a space, but it could be without a space, it could be “FN,” it could be in a different language. Based on its algorithms and machine learning, Octopai is able to automatically link them for you from the metadata we pull, so you can see here that first name and last name are actually connected to full name.
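A simple, non-ML approximation of that automatic linking is to normalize field names to a canonical key and group the matches. The real product is described as using algorithms and machine learning, so treat this as a rough sketch only:

```python
import re
from collections import defaultdict

def normalize(field_name):
    """Canonical key: drop spaces, underscores, and dashes; lower-case."""
    return re.sub(r"[\s_\-]+", "", field_name).lower()

def link_fields(field_names):
    """Group differently spelled fields that share a canonical key."""
    groups = defaultdict(list)
    for name in field_names:
        groups[normalize(name)].append(name)
    # Keep only keys where more than one spelling was found.
    return {key: names for key, names in groups.items() if len(names) > 1}
```

Spelling variants like "Full Name", "full_name", and "FullName" collapse to one key this way; abbreviations such as "FN" or translated names would need the heavier machinery the product describes.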
This is done automatically, as I mentioned, and it gives you the capability of linking the different layers, physical and semantic, and the different types, columns, dimensions, and so on. It basically gives you everything in a one-menu type of solution, so you can make an educated choice about which field to use in your reports. That was scenario number three, and that was everything I had to show you.
There is literally just one more slide I wanted to cover, and then we can open up for questions. What makes Octopai unique is three main benefits combined into one powerful solution; this is what our customers are telling us. Number one, we are cross-platform: we analyze the metadata across all of your different systems, versus the standalone tools that are out there today.
Number two, it’s very easy and simple to implement since it’s cloud-based. As I’ve mentioned a couple of times already, only about an hour or two of work is needed the first time to get up and running; after that, it’s automated. And number three, as you saw, and as Andrew mentioned, it’s very simple and intuitive. That’s what enabled Kelly to jump in and start using Octopai with virtually no help or training, although we do provide that.
Literally, with a click of the mouse in the search bar, you can get to anything you want. Everybody within the organization who needs access to Octopai can become self-sufficient with it; a 30-minute kickoff session would probably be sufficient.
That was everything I had to say. I’d now like to open up for questions and see if we have any. [silence] Okay, perfect. I have a question here about something I already touched on a little: talk a little bit about the resources and the time required to implement and manage Octopai.
Normally, in most organizations and most instances, it will not take more than an hour or two. If it’s a very complicated landscape, it may take a few hours, but we’re talking hours, not days or months. Basically, all it takes is one person configuring the Octopai client to point to the specific directories within the specific systems you’d like to hook up to Octopai. Once you’ve done that, the process is automated, and going forward, that’s all that’s required. All right.
That was basically everything I had. Any other questions? [silence] Okay, we have another question: one of your slides indicated that data effect was still in process; can you give us a timeline for when that will be in place? Great question. I’m happy to say it’s in development. It should be available, I would imagine, by the end of the year at the very latest, so just a few more months. [silence] Okay, any other questions?
All right then. Once again, Kelly, Andrew, thank you very much for your efforts in helping us share your best practices with your colleagues at your sister companies and within the industry. I have one more question, regarding licensing: how does this work? I can answer that before we wrap up. Octopai’s license is based on the modules you’ll be using plus the metadata source applications, meaning the ETL, data warehouse, and reporting tools.
On average, our customers pay anywhere from $3,000 to $10,000 a month, and that includes everything. There are no additional charges beyond that: it includes training, all cloud fees, the amount of metadata, the amount of analysis, and so on. We can, of course, go deeper into that if you’d like; whoever sent that question, we can arrange a 5-10 minute call and explain in more detail. Actually, that was Timothy. Yes, thank you. Any other questions? Okay. Andrew, thank you very much. Kelly, once again, thank you very much.
Kelly: Thank you
Andrew: Thank you very much. I appreciate it.
David: Thank you very much to my teammates who made this possible, and thank you everyone for joining. Have a great rest of your day, and if you’re on the eastern side of the United States or in Europe, have a great evening. Thank you once again; I look forward to speaking with all of you in the near future.