Crime Pays!

Algospark has just released Crime Explorer, an interactive crime exploration tool for the UK. Developed for use on desktops, it is a visualisation tool that shows reported crime data by type and location across the UK during 2017.

Knowing and understanding crime patterns is invaluable for location analytics and for supporting investment decisions in new areas and locations. The data used by Crime Explorer is categorised by month, location and type of crime. It is rich, can be quickly interpreted, and complements the suite of predictive analytics tools in Location Spark.

Crime data + location analytics = value.

Map Explorer for Location Analytics

Map Explorer is a free visualisation tool for UK wide views of key location metrics.

The demo version includes data on demography, house prices and concentration of eateries. It is an interactive tool that provides data at the national level and pop-up data at the postcode level across the UK. It has been developed for use on larger screens and for presenting snapshots to complement wider location analytics projects.

Map Explorer is a great complement to Location Spark, a mobile-centric sales pattern prediction tool for any given postcode.

Data in the demo version of Map Explorer:

  • Demographics: classifications by postcode based on the ONS Census Output Area Classification.
  • House Prices: adjusted average transaction prices from HM Land Registry Price Paid Data.
  • Eateries: count of restaurants, food outlets, coffee shops and bars, based on adjusted OpenStreetMap data.

Enjoy the exploration, and get in touch for more information on sources, methodology and customisation requirements.

Launching Location Spark

Location Spark is a new service from Algospark. Location Spark provides location analytics for investment decisions in new retail sites. It is a flexible framework that has been co-developed with fast growing UK retail networks. Location Spark brings together a multitude of location data sources, operational metrics and artificial intelligence to predict sales, the type of store and trading patterns.

Key benefits:

  • Reduced analysis time (20-40%).
  • Increased speed to decision using robust and repeatable process.
  • Increased forecasting accuracy and minimised probability of poor new site selection.
  • Rapid automated site evaluation at low cost with great “return on analytics”.

Read more about Location Spark here:

Try the demo:

Proactive Account Management with Churn Prediction & Recommenders

Proactive account management means that events and opportunities are predicted so that customer engagement and service offerings can be optimised.

When implementing a proactive approach, your account management team should be considering questions such as:

  • Which accounts are most likely to leave?
  • Which customers have a very high probability of buying additional products?
  • What products should I recommend to potential leavers?

It’s worth remembering that not all of these problems need answering at once. Nor is it necessary to launch a programme with large investment project teams, CRM and IT infrastructure. At Algospark, we generate insights from data and then develop rapid prototypes to fast-track value from artificial intelligence solutions.

How does this approach work with the proactive account management questions?

  • Which customers are most likely to leave? This link shows an example of a churn management solution that uses a Value at Risk (VaR) approach to prioritise client contact.
  • Which customers have a very high probability of buying additional products? This link shows an example of customer centric product recommendation. It uses a hybrid collaborative filtering approach to determine the products with the highest chance of purchase.
  • What products should I recommend to potential leavers? The solutions above can be combined so that customer management teams have a script that is highly personalised to the client in terms of behaviour and preferences.
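To make the churn VaR idea concrete, here is a minimal sketch. The account data, churn probabilities and annual values are all hypothetical, and VaR is taken simply as churn probability multiplied by account value; the actual Algospark solution may define it differently.

```python
# Sketch: prioritising client contact by churn "Value at Risk" (VaR).
# All figures are illustrative; VaR = churn probability x annual account value.

accounts = [
    {"id": "A001", "churn_prob": 0.72, "annual_value": 12_000},
    {"id": "A002", "churn_prob": 0.15, "annual_value": 95_000},
    {"id": "A003", "churn_prob": 0.55, "annual_value": 40_000},
]

for account in accounts:
    account["value_at_risk"] = account["churn_prob"] * account["annual_value"]

# Contact the accounts with the most revenue at risk first.
priority = sorted(accounts, key=lambda a: a["value_at_risk"], reverse=True)
for account in priority:
    print(account["id"], round(account["value_at_risk"]))
```

Note how the ranking differs from sorting on churn probability alone: the account most likely to leave is not necessarily the one with the most revenue at risk.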

The links above and other sales, product and process solutions can be found here:

Getting proactive does not need to be difficult! Get in touch to discuss further.


Deep Neural Nets and Amazing New Image Generation

Deep learning networks are famous for their ability to detect cats in images. Advances in computer vision and the application of Convolutional Neural Networks (CNNs) have yielded exciting progress in image classification and other computer vision applications. CNNs are used to classify images and identify the objects in them, essentially translating pixel values into information about what the image contains. There are often many layers between pixel values and outcomes, and these layers can be used to determine the style of an image: early layers tend to identify lines or colours, whereas later layers identify more complex objects and derivations.

Combining data generated from two images that have passed through CNNs allows a principal content image to be mixed with the style of another image. Content and style are weighted, and the algorithm iterates through numerous passes to align the images. The style of an image is derived by comparing convolutional channel filters and the correlations between them to produce Gram matrices. Further details on the approach and specification can be found here.
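The Gram matrix mentioned above is straightforward to compute. As a minimal sketch (using random NumPy arrays in place of real CNN activations), the feature maps of one layer are flattened to one row per channel, and the matrix of channel-by-channel inner products captures which style features co-occur:

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlations of one layer's feature maps.

    features: array of shape (channels, height, width).
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)   # one row per channel
    return flat @ flat.T / (h * w)      # normalised inner products

# Stand-in for activations from one CNN layer: 8 channels of 32x32 maps.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 32, 32))
g = gram_matrix(feats)
print(g.shape)  # (8, 8)
```

In neural style transfer, the style loss penalises the difference between the Gram matrices of the generated image and the style image at selected layers.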

We have been experimenting with various content images and style images over the 2017 holiday period. Although the commercial value of such image generation is difficult to quantify (as with traditional art), the neural style transfer approach allows AI to generate amazing new vivid images by combining a “content” image and a “style” image. We have posted various examples to the Algospark Neural Style Transfer Art gallery. These can be found here:

The implication is that an already large library of content and style images can be combined using AI to generate exciting new computer-generated art libraries.

Great service without the exploding product list

How can you offer great service without an exploding product list? Meeting an increasing number of customer needs from a growing list of customers can lead to exponential growth in product offerings. Do you really want to be the one stop shop for everybody for everything?

Most organisations follow the 80:20 product rule. This means that 80% of customers buy 20% of the product offerings. Products that are not in the top 20% make up part of the “product long tail”. Whenever there is an efficiency drive, these products typically appear in the cost saving table of a PowerPoint presentation. But these products have been developed to meet customer requirements, and are nearly always part of a portfolio of products that customers buy. How can investment or divestment decisions be made for specific products without jeopardising customer relationships? How should the product tail be cut? Or more importantly, what new products should I recommend to customers? The answer is to learn from supermarkets and shopping baskets.

Market basket analytics and product graph analytics are excellent ways to determine “hero products” and their dependencies with other products. These types of analytics measure products by their support (the percentage of transactions in which the product appears), confidence (the probability of buying product Y given that product X is bought) and lift (the strength of product inter-relationships). Product portfolio dashboards are an excellent way to visualise these metrics. They allow fast understanding of key product relationships, making it easy to determine core product clusters and the most important product associations. This can then be linked to evaluation of product financials (ie sell products that make money) and development of recommender systems (suggest products that customers want).
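The three metrics above can be computed directly from transaction data. A minimal sketch on a toy set of baskets (the products and transactions are made up for illustration):

```python
# Toy transactions: each basket is the set of products bought together.
baskets = [
    {"coffee", "milk"},
    {"coffee", "milk", "sugar"},
    {"milk", "bread"},
    {"coffee", "sugar"},
    {"bread"},
]
n = len(baskets)

def support(*items):
    """Share of transactions containing all the given items."""
    return sum(set(items) <= b for b in baskets) / n

def confidence(x, y):
    """P(basket contains y | basket contains x)."""
    return support(x, y) / support(x)

def lift(x, y):
    """> 1 means x and y appear together more often than chance."""
    return confidence(x, y) / support(y)

print(support("coffee"))             # 0.6
print(confidence("coffee", "milk"))  # ~0.667
print(lift("coffee", "milk"))        # ~1.11, a positive association
```

In practice these would be computed over millions of transactions with an algorithm such as Apriori or FP-Growth, but the definitions are exactly these.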

So using a product portfolio analytics tool helps keep product development in line with demand patterns. It also helps guide customers towards more consistent product portfolios without “exploding” the product list.

See an example of an Algospark product portfolio dashboard here:

This forms part of a suite of optimisation tools for sales, product and process. Further details are here:

Blogging, scraping, Google Analytics and traffic impact

Do you write a blog? How does it fit with your marketing and content strategy? How does your blog impact new traffic that visits your site? OK, enough questions. At Algospark, we were interested in a fast prototype to assess web traffic and how the blog is driving interest. We pulled together blog scraping, Google Analytics, predictive analytics and rapid dashboard prototyping to assess what is going on with the Algospark blog. As usual, data, analytics and prediction are at the core of our interest. Having a better understanding of our content mix and traffic impact should help improve this blog. Read more about the concept here:

This is a simplistic first step, but gives great insight into the content mix and how it drives traffic. The application is predicting an 8% uplift in traffic over the next 4 weeks from this article. You can see how the impact evolves, our traffic dynamics and the updated forecast here.

Here’s to our evolving and improving blog posts!

Machine learning for new location selection

Location selection is key to offline business growth. A large amount of resource is usually involved with site screening, location visits, analysis, prediction and investment review. Algospark Location Analytics has built a framework to expedite the process, make the approach more consistent and reduce the amount of time spent screening and analysing.

Site location involves numerous factors, including economic, demographic, size, customer experience, competitor and proximity considerations. These factors often have complex interactions. And this is where a machine learning framework can help.

Getting the most from location analytics involves taking into account the insights from existing locations. This can be augmented by taking a cluster approach to locations. The value from machine learning comes from using a consistent approach that takes multiple location variables into account for inference and boils them down into a go/no-go decision, supported by a projected sales forecast and profitability metric. Pulling all the factors together into a consistent framework avoids the painful comparison of multiple tables and factors.
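As a minimal sketch of what “boiling factors down into a go/no-go decision” can look like: a scoring function combines several location variables into a projected sales figure and compares it against a hurdle. The factor weights, baseline and hurdle below are illustrative placeholders; in a real framework they would come from a model trained on existing sites.

```python
# Illustrative coefficients only -- in practice these would be fitted
# on historical performance of existing locations.
WEIGHTS = {"footfall": 0.4, "median_income": 0.08, "competitors": -15_000}
BASE_SALES = 120_000   # assumed baseline annual sales
HURDLE = 250_000       # assumed minimum viable annual sales

def project_sales(site):
    """Linear combination of location factors into an annual sales forecast."""
    return BASE_SALES + sum(WEIGHTS[k] * site[k] for k in WEIGHTS)

def evaluate(site):
    """Single consistent go/no-go decision plus the supporting forecast."""
    sales = project_sales(site)
    return {"projected_sales": sales,
            "decision": "go" if sales >= HURDLE else "no-go"}

site = {"footfall": 300_000, "median_income": 32_000, "competitors": 4}
print(evaluate(site))
```

The point is not the specific model (a gradient boosted or clustered model would likely replace the linear score) but that every candidate site passes through the same repeatable evaluation.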

The outputs from this quantitative site evaluation should then be used in conjunction with qualitative overlays such as site visits and site traffic analysis. Our approach to location analytics saves time and ensures consistent decision making in site selection. It also provides more accurate new site sales projections and trading patterns from the outset.

See how we make the process easier and reduce the risk of a failed new location on the links below.

Machines are not taking your analyst job!

Artificial Intelligence (AI) does not replace analyst roles. It adds to the capabilities of analysts and the productivity of the wider team.

Machines need to learn. Like humans, they are not built knowing; they need to be trained. The key advantage of machine learning is performing repetitive tasks with high accuracy across huge swathes of information. Machine learning insights always have room for error, and so need oversight. This is also known as “humans in the loop”. The concept has been around for a long time (pilots and autopilots, for example) and will be around for a long time to come.

Good analysts need creative thinking and need to be able to contextualize. Machines do not innovate, they process and evolve. This means that the role of analysts has shifted and will continue to shift. Rather than focus on tasks in which machines excel, analysts can get to the next level using machines. This is also known as expert automation.

As machines train and learn, analysts need to do the same. The opportunities for analysts have increased, and so has the required rate of learning. With the growth in online learning, keeping up has never been easier. If you are not keeping up with analytics skills, other analysts undoubtedly are. This means that other analysts will be taking your job, not the machines!

Your differentiation is your data

Sophisticated modelling can seem daunting for many organisations wanting to become data centric. Do you have huge databases full of data with no duplication, no data gaps and everything in the same format? Is the data tagged, with clear ownership, access rules and update rules? Is it linked to other relevant sources and mapped to the core business domains and processes? If you have this, great. If you don’t, you are in the majority.

Leading-edge AI and machine learning models need quality data that is labelled and linked to business outcomes. This allows algorithms to “train” faster, then “learn” and “evolve”. Quality data drives quality insights, but it requires tailoring to needs: data needs quality control, with labels attached that link to outcomes and business processes.

Labeling and linking creates data assets that drive value. This means that your differentiation is your data. High volumes of good quality data are the foundation of analytics, insights and artificial intelligence. Your data can always be linked to third party data, but in essence, the more quality proprietary data that can be harvested, the larger the potential data advantage.

The analogy that “data is the oil of the 21st century” applies to data quality. Low grade data (and oil) are expensive to mine and process, offering limited value relative to high grade offerings.

So large volumes of data (a data lake) are great, but your data story needs to map to user journeys and business processes. This is where business transformation meets data science. Business transformation involves mapping out where you want to be (the future state) relative to where you are now (the current state) to determine what to do and where to begin. This could be an opportunity to save on processes or some new product ideas. The scoping and prioritising activity is a key phase that informs what data is needed, when and what for.

So the recipe for success is to know what data is needed, why and when. Then ensure quality data is available with the correct labelling (“metadata”) and outcomes mapping. In summary, ensure data quality and robust operations are in place before letting the algorithms loose. After all, your differentiation is your data. Garbage in = garbage out.

Steps to differentiating yourself with data:

  • Define business areas with biggest potential for value
  • Start small on data mining, but develop core quality data sources and data processes
  • Design and build early stage algorithms knowing that they will initially be early phase prototypes
  • Iterate and operate
  • Expand and succeed

Darren Wilkinson