Sam Hind: Sensor strategies: Finessing the ‘inter-operation’ of autonomous vehicles
Vehicles are becoming increasingly dependent on the capture, storage, and analysis of actively sensed data. This paper considers how sensor data generated by autonomous vehicles (AVs) is put to work. Rather than considering sensor data as only, or strictly, ‘operational’, I want to suggest that it be conceived of as both interoperable and integral to the interoperation of AVs. In short, sensor data must be pushed through, made compatible with, and prepared for, a range of different systems and processes in order to contribute to the decision-making capabilities of AVs. I discuss what is meant by interoperability with the help of Adrian MacKenzie and Anna Munster’s work on ‘platform seeing’, and Anthony McCosker and Rowan Wilken’s idea of ‘camera consciousness’. I then draw on five processes encountered in research into machine vision in this AV context: streaming processing optimization, depth sensor processing, 3D object detection, lidar point segmentation, and lidar point attenuation. Each, I argue, draws attention to the emergent ‘sensor strategies’ devised to deal with these interoperability issues: of tackling ‘stale’ video frames, of correcting missing pitch and roll annotations, of visualizing occluded objects, of balancing a trade-off between over- and under-segmentation, and of removing the interference of ‘spurious’ objects like rain droplets and dust particles. To understand them, I draw on the work of anthropologist Michael Fisch, suggesting that machine vision researchers ‘finesse’ interoperability, applying and refining their skills within an automotive domain to achieve an acceptable level of interoperation.