Why robot touch is moving from data streams to spikes

Every time a robot fingertip samples a continuous pressure value, it risks spending more energy moving data than interpreting touch.


The central engineering conflict of neuromorphic tactile systems is a paradox: traditional designs keep sensing and processing physically and logically separate, so arrays of distributed sensors must stream redundant raw signals to a central processor. In edge robotics and wearable mechatronics the consequence is predictable: bandwidth pressure, latency, and power consumption that scale poorly with tactile resolution.

Instead of treating touch as a continuous stream, neuromorphic design treats it as a sequence of events. Like biological mechanoreceptors, the interface emits spikes only when something changes, rather than at a fixed clock rate. In practice the shift is more than a software preference: spike-based systems become meaningfully efficient only when encoding, memory, and computation are co-designed in hardware. Here lies a well-known barrier. Most tactile transducers still output analog voltages or resistance changes, so a conversion step is needed before spiking inference can happen, and that conversion can reintroduce the very overhead neuromorphic systems are meant to eliminate.
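The send-on-change idea can be sketched in a few lines. This is a minimal illustration, not any specific interface: the threshold `DELTA` and the sampled signal are assumptions chosen for the example.

```python
DELTA = 0.1  # change threshold (arbitrary units; an assumption for this sketch)

def delta_events(samples, delta=DELTA):
    """Emit (timestep, +1/-1) events whenever the signal moves by >= delta
    from the last transmitted reference value; flat stretches send nothing."""
    events = []
    ref = samples[0]
    for i, x in enumerate(samples[1:], start=1):
        while x - ref >= delta:   # signal rose by at least one threshold step
            ref += delta
            events.append((i, +1))
        while ref - x >= delta:   # signal fell by at least one threshold step
            ref -= delta
            events.append((i, -1))
    return events

# Only the changes at timesteps 2 and 4 are transmitted; steady samples cost nothing.
sig = [0.0, 0.0, 0.25, 0.25, 0.05]
print(delta_events(sig))  # → [(2, 1), (2, 1), (4, -1)]
```

A constant input produces zero events, which is exactly the bandwidth saving the paragraph above describes.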

The choice of encoding determines how the system behaves. The simplest scheme is rate coding, which maps stimulus intensity to spike frequency and is typically robust to noise. Temporal schemes such as time-to-first-spike carry information in precise spike timing, using fewer spikes and lowering latency, but they are more sensitive to timing jitter and synchronization constraints. Population coding distributes meaning across ensembles, much as multiple afferent types provide complementary information about edges, vibration, or slip. The engineering implication is that encoding is not just a front-end format; it determines what the downstream network can infer at low power.
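The contrast between rate and temporal coding can be made concrete. This is a sketch under assumed parameters (a normalized intensity in [0, 1] and a hypothetical window of `T` timesteps), not a model of any particular sensor.

```python
import random

T = 100  # encoding window length in timesteps (an assumption)

def rate_code(intensity, T=T, rng=None):
    """Rate coding: spike probability per timestep proportional to intensity.
    Robust to noise, but many spikes (and much of the window) are needed to read out."""
    rng = rng or random.Random(0)
    return [1 if rng.random() < intensity else 0 for _ in range(T)]

def time_to_first_spike(intensity, T=T):
    """Temporal coding: stronger stimuli fire earlier. A single spike carries
    the value, but decoding depends on precise timing."""
    if intensity <= 0:
        return None  # no spike within the window
    return int((1.0 - intensity) * (T - 1))  # spike latency in timesteps

strong, weak = 0.9, 0.2
print(sum(rate_code(strong)), sum(rate_code(weak)))          # many vs. few spikes
print(time_to_first_spike(strong), time_to_first_spike(weak))  # early vs. late spike
```

The trade-off in the paragraph shows up directly: the rate code spends roughly `intensity * T` spikes per value, while the temporal code spends one spike but would mis-decode if its latency were jittered.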

Hardware choices make spikes either a convenient abstraction or a physical substrate. Oscillators (VCOs and ring oscillators) can produce frequency-modulated pulse trains with simple circuitry, but the pulses tend to have a rigid shape and may need additional edge-detection stages to approximate sharp spiking events. Neuron-interfaced circuits, by contrast, aim to realize integrate-and-fire dynamics directly, removing the ADC-like conversion step and aligning more naturally with event-driven inference.
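The integrate-and-fire dynamics such circuits target can be sketched in discrete time. The leak factor, threshold, and input values below are assumptions for illustration, not measured device parameters.

```python
def lif_spikes(input_current, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire in discrete time: integrate input with leak,
    fire and reset when the membrane potential crosses threshold.
    Returns the list of spike timesteps."""
    v = 0.0
    spikes = []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in      # leaky integration of the input
        if v >= threshold:       # threshold crossing -> emit a spike event
            spikes.append(t)
            v = 0.0              # reset the membrane potential
    return spikes

# A sustained press drives periodic spiking; once the press ends, silence.
press = [0.3] * 20 + [0.0] * 10
print(lif_spikes(press))  # → [3, 7, 11, 15, 19]
```

Note that the output is already event-like: a constant analog input becomes a sparse spike train with no explicit ADC stage in between, which is the appeal of neuron-interfaced front ends.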

Memristive devices have become a focus of interest because they collapse memory and compute into the device physics, providing leaky integrate-and-fire behaviour with low overhead. In touch scenarios this matters for embedded hands and skins that cannot support a continuous data flow. A related direction is self-powered mechanoreception, in which energy harvesting is coupled to encoding. One demonstrated route combines a triboelectric nanogenerator with a bistable resistor into a mechanoreceptor cell that encodes touch force into spikes without an external supply; the work reported 92.5% gesture classification accuracy when array signals were fed to a simulated spiking network.

Event vision is also converging with event touch. A compliant skin can be paired with an optical fingertip built around an event camera, so that pin displacements produce address-event streams directly, which a spiking neural network can then learn from, extracting contact features such as edge orientation through synaptic plasticity. In one implementation, an unsupervised STDP learner classified edge orientations in 10-degree increments, demonstrating that tactile perception can be reframed as sparse spatiotemporal structure rather than dense samples.
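The plasticity rule behind that kind of unsupervised learning can be sketched with a pair-based STDP update. The magnitudes and time constant below are assumptions chosen for illustration, not the values from the cited implementation.

```python
import math

A_PLUS, A_MINUS = 0.05, 0.055  # potentiation / depression magnitudes (assumed)
TAU = 20.0                     # STDP time constant in timesteps (assumed)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair.
    Pre before post (causal) -> potentiation; post before pre -> depression,
    with exponential falloff as the spikes move apart in time."""
    dt = t_post - t_pre
    if dt >= 0:
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)

# A causal pair strengthens the synapse; an anti-causal pair weakens it.
print(stdp_dw(10, 15) > 0, stdp_dw(15, 10) < 0)  # → True True
```

Because the update depends only on spike-pair timing, it runs naturally on the sparse address-event streams the fingertip produces, with no dense frames or labels involved.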

Across these prototypes, the recurring design principle is architectural: tactile intelligence works best when encoding, device physics, and decoding are treated as one pipeline. The remaining bottleneck is integration: building systems in which sensing material, spike generation, synaptic adaptation, and inference hardware are constrained together, rather than designed as a sequence of conversions.
