Today I'll get a bit technical and take you through the performance of the system.

We tested our algorithms on a large set of artificially generated images, on more than 5,000 images taken over time by a skin cancer clinic, and on over 2,000 smartphone image pairs taken by our 150 beta users. We believe this makes us the most accurate way to detect changes using a smartphone.

Before continuing, I have to introduce two important measures: sensitivity and specificity. You can think of the former as a measure of how good the system is at spotting changes; the latter measures how well it can spot non-changes. Obviously, it is easy to get a very high sensitivity, for example by declaring every case a change. But that brings the specificity down to nearly zero, so you have to look for a good compromise.
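
To make the two measures concrete, here is a minimal sketch of how they are computed from the counts of true/false positives and negatives. The function name and the example counts are illustrative, not taken from our codebase.

```python
def sensitivity_specificity(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """Compute sensitivity (true positive rate) and specificity (true negative rate)."""
    sensitivity = tp / (tp + fn)  # fraction of real changes we spot
    specificity = tn / (tn + fp)  # fraction of non-changes we correctly leave alone
    return sensitivity, specificity

# Flagging everything as a change gives perfect sensitivity but zero
# specificity -- the extreme case described above.
print(sensitivity_specificity(tp=100, fp=900, tn=0, fn=0))  # (1.0, 0.0)
```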

We are proud of the figures we achieved, reaching a very high sensitivity and specificity for both the shape and the colour assessment. The graphs below show the trend of these metrics.

[Figure: sensitivity and specificity for the shape and colour assessments]

We deliberately biased the system towards false positives, reducing the risk of missing small changes that could be an early indicator of melanoma.
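
In practice, this kind of bias usually comes down to where you place the decision threshold on the change score: the lower the threshold, the more pairs get flagged, so sensitivity goes up and specificity goes down. The sketch below, with made-up names and a hypothetical 95% sensitivity target, shows the general idea; it is not our production code.

```python
import numpy as np

def pick_threshold(scores: np.ndarray, changed: np.ndarray, min_sensitivity: float = 0.95) -> float:
    """Pick the largest change-score threshold that still meets a sensitivity target.

    We scan thresholds from highest to lowest and stop at the first one
    that catches the required fraction of real changes, i.e. the least
    aggressive bias towards false positives that still hits the target.
    """
    candidates = np.sort(np.unique(scores))[::-1]  # highest threshold first
    for t in candidates:
        flagged = scores >= t
        sensitivity = flagged[changed].mean()  # fraction of true changes flagged
        if sensitivity >= min_sensitivity:
            return float(t)
    return float(candidates[-1])  # fall back to flagging everything
```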

Asking the user to re-take the picture when a change has been detected helps us push those numbers even higher, to at least 95% for each metric.

Another way to visualise performance is to plot the so-called ROC curve and measure the area under it (AUC). The curve is obtained by plotting sensitivity against 1 − specificity (the false positive rate) as the decision threshold varies: the larger the area, the better the accuracy.

[Figure: ROC curves for the shape and colour assessments]

We obtained a value of approximately 0.92 for shape and 0.94 for colour. In the literature, a medical study is considered “worthy of further research” if this area is greater than 0.75, to have “a possible clinical application” if it is greater than 0.80, and to be “clinically meaningful” if it is greater than 0.85.
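
If you want to reproduce this kind of evaluation on your own labelled image pairs, a standard way to get the ROC curve and its AUC is scikit-learn. The arrays below are placeholders, not our data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Placeholder data: per-pair change scores and ground-truth labels
# (1 = the lesion actually changed, 0 = it did not).
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.55, 0.7])

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # tpr = sensitivity, fpr = 1 - specificity
auc = roc_auc_score(y_true, y_score)
print(f"AUC = {auc:.2f}")
```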

So we’re pretty proud of our results. Come check us out.