
Location, location, location data. Location is still paramount, but the old mantra now requires a data-centric addendum if it is to retain its relevance for years to come.
As we now define location data more formally within the broader realm of edge computing, it becomes difficult to know under what label to classify it. Is edge-bound location data an entity, a paradigm or a source? Is it all three? In truth, it’s probably part of the edge fabric itself – i.e. the data within it defines and describes where edge computing occurs, as well as the onboard edge sensors it needs: cameras, accelerometers and so on.
SEE: Don’t Curb Your Excitement: Edge Computing Trends and Challenges (TechRepublic)
Building the layers of this part of the edge fabric requires a mapping source, and therein lies the challenge. The world is not perfect; we do not live in a grid of uniform blocks and straight lines. As such, our vision and ability to map the world will always suffer from an element of imperfection and inaccuracy, which is less than ideal for demanding digital use cases.
Many Maps, One Planet
Over the past decade, location data has grown exponentially, and user expectations for location-based solutions have grown with it. Meeting this demand has become difficult and costly for businesses. Today, companies must select one of many maps created from data representing the physical world, each with its own strengths and weaknesses.
However, what the market has been missing is a solution where all businesses and all devices can collaborate and communicate through a single digital representation of the physical world. So says Michael Harrell, vice president of software engineering at TomTom, a company known for its tracking technology, SatNav-like tracking devices, and – he hopes, maybe soon – for its progressive approach to location data.
The company is now looking to make the case for bringing together private and public location data. The aim is to create a new ecosystem in which everyone can share and work on a better map of the world.
“Until now, tech companies had to buy their base map, services, and additional features from a single mapper,” Harrell said. “However, each of these mappers produces a proprietary map with little commonality, making it difficult, if not impossible, to mix and match the best services and features from multiple mappers and third parties. More importantly, innovation is determined by the mapmaker and the resources they are willing to devote to advancing their solution.”
Many tech companies that rely on richer, smarter maps to drive innovation have considered creating their own. However, doing so costs billions and, even when it succeeds, creates no core differentiation for their business.
Some organizations have instead turned to open mapping solutions like OpenStreetMap. OSM has grown tremendously over the past few years, producing and maintaining a visually appealing map with a wealth of detail, but Harrell noted that OSM also presents challenges: slower quality checks, what he called inferior routing, inconsistent standardization and a limited ability to use automation.
These challenges have led some companies to use OSM only for their secondary and tertiary markets. Given that any approach to building a device-accessible map of the planet should be fair and consistent, this is not a positive.
Observations Derived From Sensors
“So the ideal solution for mapping relies on combining the best of proprietary mapping and open mapping,” Harrell offered. “This forces innovative mapmakers to leverage artificial intelligence and machine learning capabilities to verify and standardize open data before merging it with proprietary map data such as sensor-derived observations, probe data and thousands of other sources.”
Proprietary map vendors have the capabilities and expertise to identify quality issues. If something goes wrong, the data can be quarantined, cross-checked against other sources and corrected accordingly.
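To make that workflow concrete, here is a minimal sketch in Python of how such a verify-quarantine-merge step might look. The RoadSegment record, its field names and the speed-limit tolerance are all invented for illustration; this is not TomTom’s actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class RoadSegment:
    """One observed map feature (hypothetical, heavily simplified)."""
    osm_id: int
    name: str
    speed_limit_kph: int
    source: str  # "osm", "sensor", "probe", ...

def standardize(seg: RoadSegment) -> RoadSegment:
    """Normalize an open-data edit to the base map's conventions."""
    return RoadSegment(seg.osm_id, seg.name.strip().title(),
                       seg.speed_limit_kph, seg.source)

def merge(open_seg: RoadSegment, trusted_seg: RoadSegment,
          quarantine: list) -> RoadSegment:
    """Cross-check an open edit against proprietary sensor/probe data.

    Edits that disagree with trusted observations beyond a tolerance
    are quarantined for review instead of being published.
    """
    open_seg = standardize(open_seg)
    if abs(open_seg.speed_limit_kph - trusted_seg.speed_limit_kph) > 10:
        quarantine.append(open_seg)   # flag for human/ML review
        return trusted_seg            # keep the trusted value for now
    return open_seg                   # open edit passes the check

quarantine = []
osm_edit = RoadSegment(4711, "  main street ", 80, "osm")
sensor_obs = RoadSegment(4711, "Main Street", 50, "sensor")
published = merge(osm_edit, sensor_obs, quarantine)
print(published.speed_limit_kph, len(quarantine))  # -> 50 1
```

In practice the cross-checking would be statistical and ML-driven across millions of observations, but the shape is the same: standardize the open data, compare it against trusted sources, quarantine disagreements and publish the rest.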
At the same time, technology companies would still have the freedom to contribute to and improve open mapping solutions like OSM without being bound by the priorities of a single mapper.
“Proposing such an ecosystem that uses an open base map and standardization – while allowing any company to associate, license and market content added via independent map layers – would promote effective collaboration and data sharing between a large number of companies and devices,” said Harrell. “Companies around the world, large and small, could collaborate and acquire functionality from the base map while licensing the additional content and functionality they require. This in turn would free up company resources to focus on customer-specific innovation.”
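As a rough illustration of that layered model, the sketch below composes independently licensed layers over a shared base map. The feature IDs, layer contents and attribute names are invented for illustration, not drawn from any real product.

```python
# Hypothetical open base map: shared features every participant can read.
base_map = {
    "segment/4711": {"name": "Main Street", "speed_limit_kph": 50},
}

# Independent, separately licensed layers extend or override the base.
licensed_layers = [
    {"segment/4711": {"truck_restricted": True}},   # e.g. a logistics layer
    {"segment/4711": {"ev_chargers_nearby": 3}},    # e.g. an EV layer
]

def compose(base: dict, layers: list) -> dict:
    """Merge layer attributes on top of the shared base-map features."""
    result = {fid: dict(attrs) for fid, attrs in base.items()}
    for layer in layers:
        for feature_id, attrs in layer.items():
            result.setdefault(feature_id, {}).update(attrs)
    return result

print(compose(base_map, licensed_layers)["segment/4711"])
```

Because every layer references the same base-map features, a logistics layer and an EV-charging layer from different vendors can coexist on one representation of the world.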
Democracy of Digital Devices
In theory, then, everyone and every device could collaborate and communicate through a single digital representation of the physical world, adding extra detail and proprietary information to meet the unique needs of their customers.
“While the combined resources of the world will dramatically speed up mapping, they will also allow tech companies to worry less about creating their own maps and simply focus on turning location data into something useful – having the space, time, funds and resources to innovate, drive growth and stay competitive,” said Harrell.
It generally makes sense: use a mix of proprietary map data and open data to make the world a better-mapped and safer place for everyone. There’s a fine example from a clifftop walk by the ocean, known as the CoastSnap project, where walkers were encouraged to post images of cliff erosion on social media every time they passed a sign inviting them to do so. That open data is combined with government data and some proprietary sources for a better planet.
The theory seems to hold up. Maybe one day we will say “location, location, open collaborative data” all the way to the edge of the map.
Located here! If you are looking for more information on edge computing, this introduction and this brief history will provide useful insights.