Q&A with the team behind the award-winning PaintCloud
The pandemic has reinforced the critical need for businesses to enable their employees to work efficiently outside of the conventional office. While the threat of COVID-19 has abated in Australia, restrictions on gatherings and interstate travel remain: challenges that businesses nationwide must overcome to resume near-normal, or new-normal, operations.
A decade’s worth of digital transformation occurred within the space of a few months during 2020, with organisations now presented with a slew of innovative products and services that will underpin their rapid adaptation as the digital revolution advances.
One such product is PaintCloud, a piece of cutting-edge technology that was recently awarded the 2020 QLD iAward for Government and Public Sector Solution of the Year. PaintCloud is a cost-effective solution for 3D mobile mapping businesses and users to create colourised maps without needing to replace costly and functioning 3D mapping equipment.
By linking 3D range information to real-world colours detected by a commercial off-the-shelf camera (such as a GoPro) mounted on the handheld, aerial or ground system used to capture the data, PaintCloud gives users who are unable to access an area a highly detailed look at on-site conditions.
PaintCloud can be applied to a range of scenarios including mining, construction, manufacturing, surveying, agriculture and heritage building analysis. For example, a scan of a commercial building under construction by remote access users would tell them the paint colour used in the main office, the tile pattern in the lobby, or if there’s been any structural damage following an extreme weather event.
Here, the Robotics and Autonomous Systems Group (RASG) research team within CSIRO’s Data61 deep dive into the capabilities, features, design process and potential impact of this award-winning application.
What is PaintCloud and what makes it unique?
PaintCloud is a piece of software that simplifies the process of creating a coloured 3D point cloud (a set of data points in space, with points representing a 3D shape or object) from video and image data capture technologies. For example, a site manager in Sydney could take a 3D scan of a building undergoing construction, colourise it with PaintCloud, and digitally send it to a Perth-based architect.
The architect would be able to see a detailed and interactive map showing everything from the colour of paint used on the walls in the main office, to whether the stairs in the lobby are crafted from oak or ash, to the pattern of the tiles in the kitchen. Users find these answers simply by navigating within a 3D map augmented with colour.
This enriched representation is paramount for the understanding of point clouds in a number of application scenarios, and because PaintCloud is extremely portable, it can be used for data captured using ‘plug-and-play’ sensor payloads mounted to hand-held, aerial and ground systems.
What distinguishes PaintCloud from existing static or mobile 3D mapping solutions which combine cameras with depth sensors like LiDAR (light detection and ranging technology) is that PaintCloud imposes no restrictions on the type of camera, their number or placement, or the software used to produce the original uncoloured 3D point cloud.
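In essence, colourising a point cloud means projecting each 3D point into a camera frame and sampling the pixel it lands on. The sketch below illustrates that core projection step with a simple pinhole camera model and known camera pose; the function name and interface are hypothetical, not PaintCloud's actual implementation.

```python
import numpy as np

def colourise_points(points, image, K, T_cam_world):
    """Project 3D world points into a camera image and sample pixel colours.

    points:       (N, 3) world coordinates
    image:        (H, W, 3) RGB image
    K:            (3, 3) camera intrinsics
    T_cam_world:  (4, 4) world-to-camera transform
    Returns (N, 3) colours; points outside the view get [0, 0, 0].
    """
    n = points.shape[0]
    homog = np.hstack([points, np.ones((n, 1))])        # homogeneous coords (N, 4)
    cam = (T_cam_world @ homog.T).T[:, :3]              # camera-frame coordinates
    in_front = cam[:, 2] > 1e-6                         # keep points ahead of the camera
    pix = (K @ cam.T).T                                 # perspective projection
    pix = pix[:, :2] / np.maximum(pix[:, 2:3], 1e-6)    # divide by depth
    u = np.round(pix[:, 0]).astype(int)
    v = np.round(pix[:, 1]).astype(int)
    h, w = image.shape[:2]
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colours = np.zeros((n, 3), dtype=image.dtype)
    colours[valid] = image[v[valid], u[valid]]          # sample the landing pixel
    return colours
```

Because nothing in this projection depends on what the pixels mean, the same machinery applies unchanged if the RGB image is swapped for a thermal frame or a per-pixel label map, which is part of what makes the camera choice so flexible.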
What was involved in the creation of PaintCloud?
The first major challenge with PaintCloud was to identify which points in the point cloud the camera can observe from a given position. A camera has far greater range than a LiDAR, allowing it to observe objects well before the LiDAR observes the same object.
Similarly, the placement of the camera relative to the LiDAR causes problems when an object seen by the LiDAR is out of the camera’s view, or vice versa. The typical approach to identifying visible structures from an observer position is raycasting; however, raycasting requires surface or volumetric information, which is not present in point clouds.
Point clouds are a set of 3D points with no volume. Despite this primitive representation, PaintCloud includes new research that not only estimates the visible points from an observer position but also hallucinates the surface which connects them.
The first use case for PaintCloud was to create a coloured point cloud from data captured using a handheld mobile 3D mapping device which had a GoPro camera retrofitted to it. This scenario meant that the camera was started and stopped independently of the mapping device.
The hardware independence also meant there was no way to compute the precise 3D location of the camera for each frame of video as the GoPro and the 3D mapping device use different clocks for timing. PaintCloud overcomes the lack of synchronisation by estimating a transform which correlates the motion observed in both data streams.
What scenarios can PaintCloud be applied to?
The primary objective of PaintCloud is to add colour to colour-less point clouds. Colour complements the structure contained in a 3D scan in a hugely important way. Imagine staring at a painting and only being able to perceive depth. All the beauty recorded in colour by the artist is lost in the uncoloured point cloud. By using PaintCloud to add colour to structure, people and autonomous systems can infer more from the data, and in some situations, perceive something that was otherwise invisible when using a camera or LiDAR alone.
It is important to note that PaintCloud is not limited to cameras like those found in modern smart phones. PaintCloud can be used with thermal cameras to perform tasks such as the search and rescue of humans or other warm-blooded animals in emergency situations. It can also be used to colourise a 3D scan with spectral signatures obtained using hyperspectral cameras.
Such a scan would be useful for monitoring the health of trees in a plantation or vegetation near important infrastructure such as power lines.
How has PaintCloud been used so far?
Early development of PaintCloud was performed in collaboration with GeoSLAM, Data61’s joint venture in mobile 3D mapping. PaintCloud is also in use at Emesent, a CSIRO Data61 spin-out company, via our Early Adopter Program.
PaintCloud has also been used as part of Data61’s participation in the DARPA Subterranean Challenge, where a fleet of autonomous robots are tasked with exploring a subterranean environment and identifying various types of objects and accurately reporting their location.
What value can engineering and design bring to researchers and how can research programs benefit from it?
PaintCloud is the result of a long and effective collaboration between researchers and engineers within Data61’s Robotics and Autonomous Systems Group (RASG). This collaboration has been vital for many technologies in addition to the creation and maturation of PaintCloud.
It’s hard to imagine building advanced mobile 3D mapping software and hardware or next-generation autonomous systems without leveraging research and engineering capability simultaneously. A wider collaboration between Data61’s Engineering and Design (E&D) Group and RASG would allow this innovation to reach new audiences and achieve greater impact. New research and engineering goals would also be pursued together, enabling exciting opportunities for PaintCloud that were not possible before.
What impact could PaintCloud have?
PaintCloud provides a flexible and cost-effective option for businesses to add cameras to existing camera-less mapping hardware, enabling them to rapidly update their capabilities and advance their service offerings and products. We see PaintCloud being greatly beneficial to the robotics community as they continue to pursue higher level tasks which rely on 3D information as well as other image sensors. PaintCloud’s flexibility in placing sensors will allow robotic platforms to be designed for the application rather than for 3D mapping.
One of the future directions we are excited about is using PaintCloud in conjunction with machine learning and artificial intelligence. Recent advances in embedded hardware permit heavy compute tasks such as image scene segmentation and object detection to execute in real time on edge devices. The output of these “smart cameras” can also be used by PaintCloud and the result is a point cloud painted with semantics.
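A point is typically labelled by such a smart camera in many frames, and the per-frame class labels then need to be reconciled into one label per point. One simple way to do this is a per-point majority vote; a hypothetical sketch, not PaintCloud's actual fusion strategy:

```python
import numpy as np

def fuse_labels(per_frame_labels):
    """Fuse per-point class labels observed across multiple camera frames
    into one label per point by majority vote.

    per_frame_labels: (F, N) integer array of class ids, where -1 marks
    'point not visible in this frame'. Returns (N,) fused labels.
    """
    per_frame_labels = np.asarray(per_frame_labels)
    n_points = per_frame_labels.shape[1]
    fused = np.full(n_points, -1)
    for i in range(n_points):
        obs = per_frame_labels[:, i]
        obs = obs[obs >= 0]                        # ignore frames that missed the point
        if obs.size:
            vals, counts = np.unique(obs, return_counts=True)
            fused[i] = vals[np.argmax(counts)]     # most frequently observed class
    return fused
```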
Discover what PaintCloud can do for your business by contacting us here.