Version 12 of Spotless Data's unique browseable API for data quality you can trust
We are delighted to announce the long-awaited release of version 12 of Spotless Data's unique, browseable API solution to your rogue data issues, ensuring data quality you can trust every time. Our new, improved Machine Learning filters seamlessly filter out any dirty data and replace it with spotlessly clean data ready for use. They do this by applying the specifications you have set out on your My Filters page, which you need to be logged in to see. If you have not used our service before, please sign up using your email address, Facebook or GitHub account and give our service a try. We are sure you'll be impressed.
YouTube video demonstrating how to cleanse an EPG file using Spotless Data's Machine Learning filters
As part of our version 12 release we have also published this YouTube video, which explains in a clear, easy-to-understand manner how to use these filters. Because the filters learn from their own experience of data cleansing through Machine Learning, they are particularly useful to clients who keep coming back to have new data cleaned, or who have incorporated our API into the entry point to their data warehouse or data lake to ensure these work perfectly. So prepare to say goodbye to rogue data!
Redesign of the website We have entirely redesigned our site, making it both more attractive and easier to view and navigate, focussing especially on our front page and our blogs. Please do take a look. We hope you like it!
Uniqueness rules We now support a new rule type. While many rogue data issues are common and appear time and again, we have found that there are also some unusual types of dirty data, and uniqueness rules allow these less common issues to be addressed.
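To give a flavour of what a uniqueness rule does, here is an illustrative Python sketch (the function and field names are our own for this example and are not the filter syntax itself, which you configure on your My Filters page):

```python
def apply_uniqueness_rule(rows, key_field):
    """Keep the first row for each key value; drop later duplicates."""
    seen = set()
    clean = []
    for row in rows:
        key = row[key_field]
        if key not in seen:
            seen.add(key)
            clean.append(row)
    return clean

# Example: a feed with a rogue duplicate customer id
rows = [
    {"id": "1001", "name": "Acme"},
    {"id": "1002", "name": "Globex"},
    {"id": "1001", "name": "Acme Ltd"},  # duplicate of 1001, dropped
]
clean = apply_uniqueness_rule(rows, "id")
```

The rule guarantees each key appears once in the cleansed output, keeping the first occurrence.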
Session rules These are used to cleanse both gaps and overlaps in any time-series session data, and we have now improved our support for this rule type.
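Conceptually, a session rule has to find where one session ends after the next begins (an overlap) or before it (a gap). This minimal Python sketch shows the idea on sorted (start, end) pairs; it is an illustration of the concept, not our API:

```python
from datetime import datetime

def find_gaps_and_overlaps(sessions):
    """Sessions: list of (start, end) datetimes, sorted by start time.
    Returns (gaps, overlaps) as lists of (from, to) datetime pairs."""
    gaps, overlaps = [], []
    for (s1, e1), (s2, e2) in zip(sessions, sessions[1:]):
        if s2 > e1:            # next session starts after this one ends
            gaps.append((e1, s2))
        elif s2 < e1:          # next session starts before this one ends
            overlaps.append((s2, e1))
    return gaps, overlaps

fmt = "%H:%M"
sessions = [
    (datetime.strptime("09:00", fmt), datetime.strptime("10:00", fmt)),
    (datetime.strptime("10:30", fmt), datetime.strptime("11:00", fmt)),  # 30-min gap before it
    (datetime.strptime("10:45", fmt), datetime.strptime("11:30", fmt)),  # overlaps the previous one
]
gaps, overlaps = find_gaps_and_overlaps(sessions)
```

Once found, the rule can close each gap or trim each overlap according to the policy you choose.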
Lookup rules are used to include best matches and to fill in blanks It is not always possible to perfectly match inaccurate data, and there are many occasions when a best match is better than no match at all. You can specify these using your My Filters page specifications. Removing blank entries can skew a whole column of data, yet a blank entry is often inappropriate and may leave your customers and/or employees scratching their heads and wondering what has happened. Lookup rules therefore allow these blanks to be filled in, typically using the best match available. This filling in of the blanks is a particularly cool function in our opinion. Try it out and see if you agree!
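As a rough illustration of best-match lookup (not our filter syntax), here is a Python sketch using the standard library's fuzzy matcher; the cutoff and the blank-handling policy are example choices:

```python
from difflib import get_close_matches

def lookup_rule(value, reference, cutoff=0.6):
    """Return the best reference match for a messy value.
    Blanks are flagged (None) so the caller can fill them in."""
    if not value.strip():
        return None  # blank entry: fill in per your chosen policy
    matches = get_close_matches(value, reference, n=1, cutoff=cutoff)
    return matches[0] if matches else None

countries = ["United Kingdom", "United States", "Germany", "France"]
best = lookup_rule("Untied Kingdom", countries)  # misspelled input
blank = lookup_rule("   ", countries)            # blank input
```

A real lookup rule would draw its reference values from your own lookup table and apply your stipulated fill-in policy to the blanks.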
Increased number of file encodings and range of file separators supported We now support a greater range of file encodings (character encodings) than before, and we accept a wider range of file separators, not just comma-separated value (CSV) files. Both improvements make it easier than ever to use our filters.
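Why do both matter? A file must first be decoded with the right character encoding and then split with the right separator, as this small generic Python example shows (the data is invented for illustration):

```python
import csv
import io

# A semicolon-separated file stored in Latin-1 encoding, as raw bytes
raw = "id;città\n1;Torino\n2;Genève\n".encode("latin-1")

# Step 1: decode with the correct character encoding
text = raw.decode("latin-1")

# Step 2: parse with the correct separator (semicolon, not comma)
reader = csv.DictReader(io.StringIO(text), delimiter=";")
rows = list(reader)
```

Get either step wrong and the accented characters are mangled or every row collapses into a single field; supporting more encodings and separators means fewer files need massaging before upload.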
We now offer 500Mb of free data cleansing Realising that you will want to test our data cleansing service to check for yourself that it really is as good as we say it is, we previously offered 100Mb of free data cleansing when you start to use our unique Machine Learning filters. We have now increased this to 500Mb, recognising that many businesses have a whole range of different and challenging dirty data issues, and that 100Mb may not be enough to test our service against all your different types of rogue data. Simply sign up using your email address or your Facebook or GitHub account, and check out our Pricing and range of Subscription Packages.
Many thanks to the Spotless Data Science team for their hard work in implementing this new, improved version. Have a great day!
If data quality is an issue for you, or you have known sources of dirty data but your files are just too big and the problems too numerous to fix manually, please do log in and try now.