Firstly, please keep in mind that it is our policy to provide truthful information to Navic.cloud users and to maintain an honest relationship with them, even at the risk of exposing our mistakes or shortcomings.
We are proud of what we have built, and we are not shy about admitting when we make mistakes.
We consider ourselves streets ahead of our competitors and know that they generally do not honour the above commitment; they will undoubtedly use this incident against us rather than expose their own deficiencies.
To the matter at hand: you will no doubt realise by now that Navic.cloud suffered technical difficulties, a partial yet serious problem in our systems.
Most seriously, the problem affected the Investigations tools and the logging of camera Reads into the database. We also shut down access to the Navic.cloud User Interface to prevent potential inaccuracies or data loss. The problem additionally affected long-term storage of non-VoI (Vehicles of Interest) Reads once the temporary queue storage limits were exceeded.
Because of the structure and semi-independence of the various systems that make up the Navic.cloud ecosystem (as we have evolved it over the past two and a half years), most of our systems continued to operate independently of the problem experienced.
For example, the real-time reaction and reporting systems processing Navic.cloud Hits (Alertroom/NNOC) remained predominantly functional through our backup structure, even while the primary Read logging system was down.
Navic.cloud also continued to receive camera Reads at greater than 99% efficiency, supporting the Alertroom functions described above.
We have resolved the limiting factor that caused the problem. We rebuilt the database of approximately 2.2 billion records, and we now have a core data structure that is more resilient, performs better, and provides capacity for many more years of Reads from the sensors in the field.
The unexpected uptake of Navic’s services, and the related rapid growth in our camera network, VoI database, and number of Reads, have made our growth painful at times, but we will do better and continue to strive for perfection.
Thank you for your understanding and our apologies once again for any inconvenience.
Please note that from tomorrow onwards you will be able to access support details, Navic.cloud ecosystem status, system feedback, planned maintenance, and a schedule of upcoming eMeetings – see the top of our website at www.navic.cloud
Jason Berry and the whole of the Navic Team