What limits the progress of drones today is no longer the airframe alone. With the latest release of the updated Prism SKR, Teledyne FLIR aims to mark another turn in the evolution of unmanned systems: away from remote-controlled vehicles and airframe optimization, toward autonomous perception, mission-specific reasoning, and software that keeps operating under adverse network conditions. The new version transforms the Prism SKR from an automated targeting device into an autonomy-oriented payload suitable for FPV applications as well as loitering munitions, interceptors, and counter-UAS drones.

This matters because most drone operations face the same challenge in the final stage of flight: either the control signal degrades significantly, or the drone loses visual contact with the target. Prism SKR's pixel-lock targeting was created specifically to address this problem. Once locked onto an object, the drone keeps following it through interference or signal loss, relying on persistent re-identification rather than an uninterrupted connection to the controller. In other words, the drone completes the terminal phase of its mission autonomously.
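Teledyne FLIR has not published how its re-identification works, but the general pattern is well established in computer vision: store an appearance embedding of the locked target, then match new detections against it after an occlusion or dropout. A minimal sketch under that assumption; the class, threshold, and embedding source are all hypothetical, not the Prism SKR's actual implementation:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two appearance embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

class ReIdTracker:
    """Keeps a target 'locked' across dropouts by matching appearance,
    rather than trusting a continuous detection stream or radio link."""

    def __init__(self, match_threshold: float = 0.7):
        self.template = None              # embedding captured at lock-on
        self.match_threshold = match_threshold

    def lock(self, embedding: np.ndarray) -> None:
        """Store the appearance of the object the operator selected."""
        self.template = embedding / np.linalg.norm(embedding)

    def reacquire(self, candidates: list[np.ndarray]) -> int | None:
        """After occlusion or signal loss, return the index of the detection
        that best matches the locked template; None if nothing is close enough."""
        if self.template is None or not candidates:
            return None
        scores = [cosine_similarity(self.template, c) for c in candidates]
        best = int(np.argmax(scores))
        return best if scores[best] >= self.match_threshold else None
```

The key property is in the last method: when the target reappears, the tracker re-acquires it from appearance alone, with no operator input required.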
The software architecture is more interesting than the advertised targeting feature, though. According to Teledyne FLIR, the updated platform can transition from AI-assisted targeting at handover to fully autonomous mission execution, supported by improved terrain awareness and the ability to designate 3D aimpoints. The designers evidently see drone autonomy not in binary terms but as a hierarchy of controls, in which humans provide the initial goals while the computer executes the mission in dynamic conditions via onboard processing.
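That hierarchy can be pictured as a small mode machine: the operator supplies the goal, and the software escalates its own authority only when the situation demands it. A minimal sketch; the mode names and transition rules below are illustrative assumptions, not the Prism SKR's actual logic:

```python
from enum import Enum, auto

class ControlMode(Enum):
    OPERATOR_GUIDED = auto()   # human steers, software assists
    ASSISTED_LOCK = auto()     # human has designated a target, software tracks
    AUTONOMOUS = auto()        # software executes the terminal phase alone

def next_mode(mode: ControlMode, link_ok: bool, target_locked: bool) -> ControlMode:
    """Escalate autonomy as conditions change: the human sets the goal
    (the lock), the machine finishes the task if the link degrades."""
    if mode is ControlMode.OPERATOR_GUIDED and target_locked:
        return ControlMode.ASSISTED_LOCK
    if mode is ControlMode.ASSISTED_LOCK and not link_ok:
        return ControlMode.AUTONOMOUS
    return mode
```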
This trend reflects a pattern in drone computing that has been forming for years: as more computation moves on board, edge devices need substantial DRAM, flash memory, and local storage. Real-time processing of a 1080p video stream, for example, can consume roughly 1 to 2 GB of RAM, and it is precisely that onboard capacity that keeps analytics running when the connection to the operator degrades or drops. An autonomy-oriented software platform running on relatively low-power onboard hardware is therefore as much a systems-engineering problem as an algorithmic one.
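The 1 to 2 GB figure is easy to sanity-check with back-of-the-envelope arithmetic; the detector and activation sizes below are assumed for illustration, not measured:

```python
# Rough memory budget for real-time 1080p onboard analytics (illustrative).
WIDTH, HEIGHT, CHANNELS = 1920, 1080, 3              # one uncompressed RGB frame
BYTES_PER_FRAME = WIDTH * HEIGHT * CHANNELS          # ~6.2 MB
RING_BUFFER_FRAMES = 60                              # ~2 s of history at 30 fps
buffer_bytes = BYTES_PER_FRAME * RING_BUFFER_FRAMES  # ~373 MB

model_weights_mb = 300    # a mid-size detector in FP16 (assumed)
activations_mb = 500      # intermediate tensors at inference (assumed)
total_mb = buffer_bytes / 1e6 + model_weights_mb + activations_mb
print(f"approx. working set: {total_mb:,.0f} MB")    # lands near 1.2 GB
```

Even with conservative assumptions, the working set lands squarely in the quoted 1 to 2 GB range before the OS, flight stack, and logging take their share.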
Sensor fusion is part of the same story. Studies of UAV detection have established that purely optical cameras are prone to missing objects in darkness, fog, heavy rain, and occlusion, which is why dual-modal imaging is needed for more accurate results. In a paper released in early 2024 and based on the DroneVehicle dataset, which comprises 28,439 aligned RGB and infrared image pairs, researchers used a transformer-based visible-thermal approach to reach a mean average precision of 75.5%. The result proved superior to competing methods in adverse conditions such as nighttime, fog, and occlusion. The company's claim of support for both IR and visible sensors is thus well justified.
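The paper's transformer is beyond a short sketch, but the core idea of dual-modal fusion, a separate stem per modality whose features are merged by a learned layer, fits in a few lines of PyTorch. This is a simplified convolutional stand-in, not the method from the paper:

```python
import torch
import torch.nn as nn

class DualModalFusion(nn.Module):
    """Minimal mid-level fusion of RGB and infrared feature maps:
    each modality gets its own conv stem, features are concatenated,
    and a 1x1 conv learns how to weight the two sources."""

    def __init__(self, feat_channels: int = 32):
        super().__init__()
        self.rgb_stem = nn.Sequential(
            nn.Conv2d(3, feat_channels, 3, stride=2, padding=1), nn.ReLU())
        self.ir_stem = nn.Sequential(
            nn.Conv2d(1, feat_channels, 3, stride=2, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(2 * feat_channels, feat_channels, 1)

    def forward(self, rgb: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.rgb_stem(rgb), self.ir_stem(ir)], dim=1)
        return self.fuse(fused)

# An aligned RGB/IR pair, as in the DroneVehicle dataset
rgb = torch.randn(1, 3, 512, 640)
ir = torch.randn(1, 1, 512, 640)
features = DualModalFusion()(rgb, ir)   # shape: (1, 32, 256, 320)
```

The point of the learned 1x1 fusion layer is that the network itself decides when to trust thermal over visible, which is exactly what matters at night or in fog.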
Perhaps the most important point here is also the least obvious. The updated payload is made to fit smoothly into existing drone workflows, including seamless integration with QGroundControl and support for AI-assisted coding. That narrows the gap between autonomy in the laboratory and autonomy in the field. Put differently, the competitive advantage lies in a drone's ability to keep perceiving, deciding, and tracking in degraded network environments.
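QGroundControl speaks MAVLink, and MAVLink already defines camera-tracking commands, so the integration path is a standard one. A minimal sketch using pymavlink; the connection string is an assumption, and whether the Prism SKR maps its pixel lock to this exact command is not stated by the company:

```python
# Sketch: commanding a point track over MAVLink, the protocol QGroundControl uses.
from pymavlink import mavutil

link = mavutil.mavlink_connection("udpin:0.0.0.0:14550")  # assumed endpoint
link.wait_heartbeat()

# MAV_CMD_CAMERA_TRACK_POINT: x/y given as fractions of the image frame.
link.mav.command_long_send(
    link.target_system, link.target_component,
    mavutil.mavlink.MAV_CMD_CAMERA_TRACK_POINT,
    0,            # confirmation
    0.5, 0.5,     # param1/param2: track the image center
    0.05,         # param3: point radius as a fraction of the image
    0, 0, 0, 0)   # unused params
```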
