We are proud to announce that Algospark has joined Microsoft’s AI Inner Circle Partner program. The program includes specialists able to provide custom services or enhanced AI product solutions using Microsoft AI technologies.
Algospark works with numerous delivery partners to build the best solutions for our clients. We are excited to be working with, and to be recognised as an AI specialist by, one of the industry-leading companies in AI technology development.
Congratulations to the visionary Legal Team at Pernod Ricard Global Travel Retail for their “highly commended” award at the British Legal Awards 2019! Algospark are proud to have designed and delivered the enabling AI technology and service.
The “AI Approve Tool” reviews advertising image compliance with laws and corporate standards. It delivers multiple rule checks across multiple countries to generate an approve, refer or reject decision. This ensures decision consistency and significantly faster processing times.
The web-based solution brings together a suite of translation, natural language processing, computer vision and classification algorithms. It has been built to scale using a micro-services design with containers, giving a fast, production-ready AI service.
As part of the launch of “Future Decoded 2019”, Microsoft UK have released this report, Accelerating Competitive Advantage with AI. It contains tips on successful approaches to applied AI, survey results from UK enterprises and use cases. It also reiterates key messages that we use at Algospark:
AI is beyond data, it is part of a business transformation process.
AI is key to competitive advantage.
AI needs change management to ensure success.
Happy reading! Get in touch with us to discuss how we can help you accelerate innovation using applied AI!
There has never been a better time to explore new innovation opportunities using analytics and applied AI solutions! How do you get started? At Algospark we use a portfolio of existing solutions and fast prototyping to minimise time to solution discovery. It is OK to aim for 80% of the solution. It can be iterated later. If you’ve launched at 99% accuracy, you’ve launched too late.
Another method to accelerate the time to solution is to develop in a sand pit. These are also called innovation spaces, test beds or labs. It means using an environment that does not need to fully integrate with existing IT systems from the beginning. Using prototypes in these environments helps quickly migrate to full solutions further down the road.
We also use a modular approach, also called a “micro-services perspective”, when we build solutions. This means that each component in the solution is added with minimal dependency on other components. It makes the solution much more flexible and easier to align with other systems and processes later in the development cycle.
The development methodology should be agile and iterative. It should also have a DevOps perspective, ie quick to update and deploy, and easy to manage. It should also have a DataOps perspective. This is a set of processes used to improve quality and reduce the cycle time of data analytics.
Throughout the solution building process it is critical to ensure end user adoption. This is where applied data science meets business transformation. New processes and services always need to be seen in the context of user adoption. They also need to be developed within a strong business transformation and change management framework.
Contact Algospark! We have a portfolio of solutions, frameworks and existing products that minimise development risk. We use tried and tested methods of quickly realising value. Speak with us about how we can help you.
“A significant constraint facing firms looking to grow their analytics capabilities is their ability to acquire the right talent. After identifying what talent and skills are needed, businesses are then finding that these skills are in short supply.”
“With demand outstripping supply, individual businesses are faced with the challenge of how to attract highly sought-after talent. In many cases, although the skills being sought after are highly specialised, often they are sector agnostic. As a result, firms across sectors are fishing in the same pool for talent.”
“In a competitive market where analytics skills can command high salaries, companies struggle to compete with the world’s leading digital companies. Companies whose brands aren’t known for providing exciting opportunities for employees to use their data and analytical skills find it hard to reach their target audience.”
Speak with us at Algospark to discuss how we can help fast track your innovation initiatives using applied analytics and artificial intelligence solutions.
Want to reduce complaints, focus on customer priorities and reduce churn? You can do this by using Natural Language Processing (NLP) and customer profiling. We convert text to metrics, and then to emotions. This helps quantify what is important, and then allows tracking of impact to ensure the best reactions to customer feedback.
There is no point listening if there is no action! Algospark build tailored frameworks for feedback that track, aggregate and map insights. Verbatim feedback quantification is a key foundation to building strong customer relations. See a basic example here
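As a minimal sketch of the text-to-metrics step, the toy scorer below counts matches against a tiny hand-made emotion lexicon. The lexicon, categories and comments are invented for illustration and stand in for the full NLP pipeline described above:

```python
# Minimal sketch of verbatim feedback quantification: score free-text
# comments against a tiny hand-made emotion lexicon, then aggregate.
# Lexicon, categories and comments are illustrative placeholders only.
from collections import Counter

EMOTION_LEXICON = {
    "delighted": "joy", "love": "joy", "great": "joy",
    "waiting": "frustration", "slow": "frustration",
    "broken": "anger", "refund": "anger", "confusing": "frustration",
}

def score_feedback(comments):
    """Convert raw comments into counts per emotion category."""
    totals = Counter()
    for comment in comments:
        for word in comment.lower().split():
            word = word.strip(".,!?")
            if word in EMOTION_LEXICON:
                totals[EMOTION_LEXICON[word]] += 1
    return totals

counts = score_feedback([
    "Love the new app, great update!",
    "Checkout is broken and support is slow.",
])
print(counts.most_common())
```

A production version would replace the keyword lookup with translation, NLP and classification models, but the aggregation step looks much the same.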
We are predicting that Arsenal will win the Arsenal v Chelsea fixture this weekend. We use an ensemble of three AI models to predict home win, draw or away win results. Our models use player data, manager data and five years of historic fixture data to determine the most likely outcomes.
We calculate match outcome probabilities, decimal odds implied from these probabilities and how the predictions performed last week. The matches are ranked in order of confidence of the prediction (from high to low).
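The two transforms mentioned above are simple to sketch: decimal odds implied by a probability are just its reciprocal, and fixtures can be ranked by the size of the top probability. The fixture names and probabilities below are invented for illustration, not our actual model output:

```python
# Sketch: model probabilities -> implied decimal odds, plus a ranking
# of fixtures by prediction confidence. Example figures are invented.
def decimal_odds(prob):
    """Decimal odds implied by a probability (no bookmaker margin)."""
    return round(1.0 / prob, 2)

predictions = {
    "Arsenal v Chelsea": {"home": 0.55, "draw": 0.25, "away": 0.20},
    "Everton v Spurs":   {"home": 0.40, "draw": 0.30, "away": 0.30},
}

for fixture, probs in sorted(
    predictions.items(),
    key=lambda item: max(item[1].values()),  # confidence = top probability
    reverse=True,
):
    outcome = max(probs, key=probs.get)
    print(fixture, "->", outcome, "@", decimal_odds(probs[outcome]))
```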
Don’t hesitate to get in touch to discuss further. email@example.com
DISCLAIMER: These predictions are guidelines only. Algospark takes no responsibility for the accuracy of information or any losses incurred resulting from decisions or actions taken using these predictions.
Data is everywhere, the insights are exciting, and data can power the next generation of systems investment. This is the accepted wisdom, but how do you get started? Unless you work in a technology-led business, IT teams rarely lead business innovation, yet they are typically the key gatekeepers to unlocking the power of organisational data. The evolution of IT to cloud, DevOps, micro-services and containers should be keeping the IT team busy. This is even before considering master data, data lakes, governance and moving away from traditional Extract Transform Load (ETL) procedures.
Exploiting new revenue opportunities and cost savings from data needs to be driven through a business transformation lens. The priorities and needs of the business, balanced against the speed and risks of implementation, are critical success factors for any data science initiative. This is difficult for most people to conceptualise, which is why rapid prototyping in “AI Innovation Hubs” is an excellent way to demonstrate concepts and likely benefits. Seeing the results of a prototype alongside a business case and an agile implementation plan is an excellent way to rally key stakeholders to further develop and launch the initiative. This should be done outside existing IT and data architecture, but mapped to how it can be “productionised” as part of the plan.
AI Innovation Hubs are key to kick starting new and exciting applied data science projects. Working within the confines of existing data insight normally means working within processes and parameters of existing IT. So it is much better to work outside existing frameworks, but co-developed with data insight teams in the AI Innovation Hub. This ensures:
Innovative new projects help uplift and augment current procedures.
Upskill priorities are easily identified and implemented with exciting hands-on training initiatives for analysts.
Learnings from prototypes can drive incremental changes to wider data engineering and approaches to ELT.
Overall, these benefits will lead to wider organisation efficiencies from faster access to more relevant data using less processing time and less analyst time. This ultimately results in significantly greater efficacy from insight, and substantially lower costs.
Want to get started with an Innovation Hub? Get in touch!
Looking to take your cycle to work? Buying a new house? Worried about your walk to work? Take a look at the Algospark crime predictor that pulls together the crime data patterns from the last 3 years in the UK to predict crime hot spots.
Algospark Crime Predictor has been designed to predict crime by type and by postcode sector using AI. Not only does it help with better crime prevention planning, it also feeds investment decision frameworks to improve outcomes regarding new properties and new business locations.
The new age of computing and data has unleashed a whole new approach to scientific discovery. I was lucky enough to spend a few hours at CogX18 and listen to Zavain Dar (https://twitter.com/zavaindar) of Lux Capital explain how thinkers of old can move away from the basis of ground truths towards a data-driven interpretation of how things link together, free of prior assumptions. There is no need to understand the ground truths, nor the myriad of supporting hypotheses and assumptions, in order to prove a concept. It is now possible to apply cognition beyond human understanding to map input X to outcome Y using machine learning, deep learning and cognitive computing.
Pick an historic scientific legend, eg Newton. Imagine the steps, process, ground truth understanding and empirical proofs required to explain gravity! Now imagine taking readings, mapping to outcomes and feeding the results into a deep learning model. With enough iterations, the core concepts and relationships will be implicitly mapped.
In medicine, for example in the field of dermatology, applied machine learning is already yielding diagnoses that are better than those obtained from human specialists. This is saving lives and reducing the amount of specialist input for critical diagnoses.
New radical empiricism is here! It’s real and can be applied to a wide range of fields. However, it is especially relevant to use cases in which ground truths are notoriously difficult to prove. Healthcare and complex system outcome prediction are perfect examples. This also implies that economics and investment management, with their myriad of nested assumptions, are also front and centre of the new radical empiricism wave.
Algospark are developing predictive analytics solutions across equity investment, retail location investment and complex systems prediction. These are front and centre for applied new radical empiricism. NRE is nascent, but a great concept and one that will receive increasing focus in coming months and years.
Algospark has just released Crime Explorer which is an interactive crime exploration tool for the UK. Developed for use on desktops, it is a visualisation tool that shows reported crime data by type and location across the UK during 2017.
Knowing and understanding crime patterns is invaluable for location analytics and to support investment decisions in new areas and locations. Data used by Crime Explorer is from data.police.uk and is categorised by month, location and type of crime. The data is rich, can be quickly interpreted and complements the suite of predictive analytics tools in Location Spark.
The demo version includes data on demography, house prices and concentration of eateries. It is an interactive tool that provides data at the national level and pop-up data at the postcode level across the UK. It has been developed for use on larger screens and for presenting snapshots to complement wider location analytics projects.
Map Explorer is a great complement to Location Spark, which is a mobile-centric sales pattern prediction tool for any given postcode.
Location Spark is a new service from Algospark. Location Spark provides location analytics for investment decisions in new retail sites. It is a flexible framework that has been co-developed with fast growing UK retail networks. Location Spark brings together a multitude of location data sources, operational metrics and artificial intelligence to predict sales, the type of store and trading patterns.
Reduced analysis time (20-40%).
Increased speed to decision using robust and repeatable process.
Increased forecasting accuracy and minimised probability of poor new site selection.
Rapid automated site evaluation at low cost with great “return on analytics”.
Proactive account management means that events and opportunities are predicted so that customer engagement and service offerings can be optimised.
When implementing a proactive approach, questions your account management team should be considering:
Which accounts are most likely to leave?
Which customers have a very high probability of buying additional products?
What products should I recommend to potential leavers?
It’s worth remembering that not all the problems need answering at once. It is not necessary to launch a programme with a large investment in project teams, CRM and IT infrastructure. At Algospark, we generate insights from data and then develop rapid prototypes to fast track value from artificial intelligence solutions.
How does this approach work with the proactive account management questions?
Which customers are most likely to leave? This link shows an example of a churn management solution that uses a Value at Risk (VaR) approach to prioritise client contact.
Which customers have a very high probability of buying additional products? This link shows an example of customer-centric product recommendation. It uses a hybrid collaborative filtering approach to determine the products with the highest chance of purchase.
What products should I recommend to potential leavers? The solutions above can be combined so that customer management teams have a script that is highly personalised to the client in terms of behaviour and preferences.
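As an illustration of the Value at Risk style prioritisation, the sketch below ranks customers by expected revenue at risk (churn probability multiplied by annual value). The customer records are invented, and the model that would produce the churn probabilities is out of scope here:

```python
# Illustrative sketch of VaR-style churn prioritisation: rank customers
# by expected revenue at risk. All customer records are invented.
customers = [
    {"name": "Acme Ltd",  "churn_prob": 0.70, "annual_value": 12000},
    {"name": "Bell & Co", "churn_prob": 0.10, "annual_value": 90000},
    {"name": "Corto SA",  "churn_prob": 0.45, "annual_value": 30000},
]

for c in customers:
    c["value_at_risk"] = c["churn_prob"] * c["annual_value"]

# Contact list: highest revenue at risk first.
for c in sorted(customers, key=lambda c: c["value_at_risk"], reverse=True):
    print(f'{c["name"]}: {c["value_at_risk"]:,.0f} at risk')
```

Note that the highest churn probability does not top the list; weighting by customer value is what makes the prioritisation actionable.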
Deep learning networks are famous for their ability to detect cats in images. Advances in computer vision and the application of Convolutional Neural Networks (CNNs) have yielded exciting results in image classification and computer vision applications. CNNs are used to classify images and identify the objects in them. They essentially translate pixel values into information about what is in the image. There are often many layers between pixel values and outcomes. The layers in these networks can be used to determine the style of an image. Early layers tend to identify lines or colours, whereas later layers identify more complex objects and derivations.
Combining data generated from two images that have been passed through a CNN allows a principal content image to be mixed with the style of another image. Content and style are weighted, and the algorithm iterates through numerous passes to align the images. The style of an image is derived by comparing the convolutional filter channels and the correlations between them to produce Gram matrices. Further details on the approach and specification can be found here.
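A minimal sketch of the style representation: the Gram matrix of a layer’s feature maps captures which filter channels co-activate, independent of where in the image they fire. The layer shape below is illustrative; in practice the activations would come from a pretrained CNN:

```python
# Sketch of a Gram matrix for neural style transfer: flatten each
# channel's feature map and take channel-by-channel inner products.
# The random "activations" stand in for real CNN layer outputs.
import numpy as np

def gram_matrix(features):
    """features: (channels, height, width) activations from one CNN layer."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)   # one row per channel
    return flat @ flat.T / (h * w)      # channel co-activation matrix

layer = np.random.rand(64, 32, 32).astype(np.float32)
g = gram_matrix(layer)
print(g.shape)  # (64, 64) -- a style signature independent of layout
```

Style loss is then typically the difference between the Gram matrices of the style image and the generated image, summed over several layers.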
We have been experimenting with various content images and style images over the 2017 holiday period. Although the commercial value of such image generation is difficult to quantify (as with traditional art), the neural style transfer approach allows AI to generate amazing new vivid images by combining a “content” image and a “style” image. We have posted various examples to the Algospark Neural Style Transfer Art gallery. These can be found here:
How can you offer great service without an exploding product list? Meeting an increasing number of customer needs from a growing list of customers can lead to exponential growth in product offerings. Do you really want to be the one stop shop for everybody for everything?
Most organisations follow the 80:20 product rule. This means that 80% of customers buy 20% of the product offerings. Products that are not in the top 20% make up part of the “product long tail”. Whenever there is an efficiency drive, these products typically appear in the cost saving table of a PowerPoint presentation. But these products have been developed to meet customer requirements, and are nearly always part of a portfolio of products that customers buy. How can product investment or divestment decisions be made for specific products without jeopardising customer relationships? How should the product tail be cut? Or more importantly, what new products should you recommend to customers? The answer is to learn from supermarkets and shopping baskets.
Market basket analytics and product graph analytics are excellent ways to determine “hero products” and their dependencies on other products. These types of analytics measure products by their support (the percentage of transactions in which the product appears), confidence (the probability of buying product Y given that product X is bought) and lift (the strength of product inter-relationships). Product portfolio dashboards are an excellent way to visualise these metrics. They allow fast understanding of key product relationships, making it easy to determine core product clusters and the most important product associations. This can then be linked to evaluation of product financials (ie sell products that make money) and the development of recommender systems (suggest products that customers want).
So using a product portfolio analytics tool will help keep product development in line with demand patterns. It also helps guide customers to more consistent product portfolios without “exploding” the product list.
Do you write a blog? How does it fit with your marketing and content strategy? How does your blog impact new traffic that visits your site? OK, enough questions. At Algospark, we were interested in a fast prototype to assess web traffic and how the blog is driving interest. We pulled together blog scraping, Google Analytics, predictive analytics and rapid dashboard prototyping to assess what is going on with the Algospark blog. As usual, data, analytics and prediction are at the core of our interest. Having a better understanding of our content mix and traffic impact should help improve this blog. Read more about the concept here: https://algospark.com/#ideation
This is a simplistic first step, but gives great insight into the content mix and how it drives traffic. The application is predicting an 8% uplift in traffic over the next 4 weeks from this article. You can see how the impact evolves, our traffic dynamics and the updated forecast here.
Location selection is key to offline business growth. A large amount of resource is usually involved with site screening, location visits, analysis, prediction and investment review. Algospark Location Analytics has built a framework to expedite the process, make the approach more consistent and reduce the amount of time spent screening and analysing.
Site location involves numerous factors including: economic, demographic, size, customer experience, competitor and proximity outlet considerations. These factors often have complex interactions. And this is where a machine learning framework can help.
Getting the most from location analytics involves taking into account the insights from existing locations. This can be augmented by taking a cluster approach to locations. The value from machine learning comes from using a consistent approach that takes multiple location variables into account and boils them down into a go / no-go decision, supported by a projected sales forecast and profitability metric. Pulling all the factors together into a consistent framework avoids the painful work of comparing multiple tables and factors.
The outputs from this quantitative site evaluation should then be used in conjunction with qualitative overlays such as site visits and site traffic analysis. Our approach to location analytics saves time and ensures consistent decision making to site selection. It also provides more accurate new site sales projections and trading patterns from the outset.
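As a hypothetical sketch of the quantitative step, the linear scoring below stands in for a trained model: weighted location variables roll up into a projected sales figure and a go / no-go decision against a profitability threshold. All weights, features and thresholds are invented for illustration:

```python
# Hypothetical sketch: multiple location variables boiled down into a
# projected sales figure and a go / no-go decision. The linear weights
# stand in for a trained ML model; every number here is invented.
SALES_WEIGHTS = {  # per-unit contribution to weekly sales (GBP)
    "footfall_thousands": 180.0,
    "competitors_nearby": -950.0,
    "median_income_thousands": 120.0,
}
BASELINE_SALES = 4000.0
GO_THRESHOLD = 9000.0  # minimum projected weekly sales to clear profitability

def evaluate_site(features):
    projected = BASELINE_SALES + sum(
        SALES_WEIGHTS[k] * v for k, v in features.items()
    )
    return ("go" if projected >= GO_THRESHOLD else "no-go"), projected

decision, sales = evaluate_site({
    "footfall_thousands": 35,
    "competitors_nearby": 2,
    "median_income_thousands": 28,
})
print(decision, round(sales))
```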
See how we make the process easier and reduce the risk of a failed new location on the links below.
Artificial Intelligence (AI) does not replace analyst roles. It adds to the capabilities of analysts, and the productivity of the wider team.
Machines need to learn. Like humans, they are not built knowing, but need to be trained. The key advantage of machine learning is performing repetitive tasks with high accuracy across huge swathes of information. Machine learning insights always have room for error, and so need oversight. This is also known as “humans in the loop”. This has been around for a long time, for example pilots, and auto-pilots. It will also be around for a long time to come.
Good analysts need creative thinking and need to be able to contextualize. Machines do not innovate, they process and evolve. This means that the role of analysts has shifted and will continue to shift. Rather than focus on tasks in which machines excel, analysts can get to the next level using machines. This is also known as expert automation.
As machines train and learn, analysts need to do the same. The opportunities for analysts have increased, and so has the required rate of learning. So as machines train, analysts must also train. With the growth in online learning, this has never been easier. If you are not keeping up with analytics skills, other analysts undoubtedly are. This means that other analysts will be taking your job, not the machines!
Sophisticated modelling can seem daunting for many organisations wanting to become data centric. Do you have huge databases full of data with no duplication, no data gaps and everything in the same format? Does the data have tags, clear ownership, access rules and update rules? Is it linked to other relevant sources and mapped to the core business domains and processes? If you have this, great. If you don’t, you are in the majority.
Leading edge AI and machine learning models need quality data that are linked to business outcomes and have labels. This allows algorithms to “train” faster, then “learn” and “evolve”. So quality data drives quality insights, but requires tailoring to needs. In other words, data needs quality controlling and labels attached that link to outcomes and business processes.
Labeling and linking creates data assets that drive value. This means that your differentiation is your data. High volumes of good quality data are the foundation of analytics, insights and artificial intelligence. Your data can always be linked to third party data, but in essence, the more quality proprietary data that can be harvested, the larger the potential data advantage.
The analogy that “data is the oil of the 21st century” applies to data quality. Low grade data (and oil) are expensive to mine and process, offering limited value relative to high grade offerings.
So large volumes of data (a data lake) are great, but your data story needs to map to user journeys and business processes. This is where business transformation meets data science. Business transformation involves mapping out where you want to be (the future state) relative to where you are now (the current state) to determine what to do and where to begin. This could be either an opportunity to save on processes or some new product ideas. The scoping and prioritising activity is a key phase that will inform what data is needed, when and what for.
So the recipe for success is to know what, why and when data is needed. Then ensure quality data is available with the correct labeling (“meta-data”) and outcomes mapping. In summary, ensure data quality and robust operations are in place before letting the algorithms loose. After all, your differentiation is your data. Garbage in = garbage out.
Steps to differentiating yourself with data:
Define business areas with biggest potential for value
Start small on data mining, but develop core quality data sources and data processes
Design and build early stage algorithms knowing that they will initially be early phase prototypes