For this thesis, you continue an existing project in which we have tested TorchServe. However, TorchServe does not meet our requirements for performant, hardware-optimized multi-model serving and sequential pipelines. Our next candidate is Nvidia Triton, which has been native to Nvidia Jetson (our deployment hardware) since August 2021. Our software solutions are typically sequential multi-model pipelines, but they can also contain parallel parts.

As an example use case, we have implemented a visual inspection solution using deep learning frameworks (PyTorch) and Python, together with all corresponding software and code (exported model, Python handling code). We would like to test specific deep learning models that we have implemented on dummy data under many possible setup changes. Many different testing setups, and the influence of each setup change on performance results, need to be evaluated.
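To give a flavor of the kind of evaluation involved, the following is a minimal sketch of sweeping one setup parameter (batch size) and measuring latency. It uses a dummy NumPy workload in place of an exported model; the function names, batch sizes, and repeat count are illustrative assumptions, not part of our actual solution, and a real evaluation would run against Triton on Jetson hardware.

```python
import time
import numpy as np

def dummy_model(batch: np.ndarray) -> np.ndarray:
    # Stand-in for an exported deep learning model: a single matmul.
    weights = np.ones((batch.shape[1], 10), dtype=np.float32)
    return batch @ weights

def benchmark(batch_size: int, repeats: int = 50) -> float:
    # Mean latency in milliseconds for one setup (here: batch size).
    data = np.random.rand(batch_size, 3 * 32 * 32).astype(np.float32)
    start = time.perf_counter()
    for _ in range(repeats):
        dummy_model(data)
    return (time.perf_counter() - start) / repeats * 1000.0

# Sweep one setup dimension and collect results per configuration.
results = {bs: benchmark(bs) for bs in (1, 4, 16)}
for bs, ms in results.items():
    print(f"batch={bs:2d}  mean latency={ms:.3f} ms")
```

In the actual thesis work, the swept dimensions would instead be serving-side options (model format, concurrency, pipeline layout), with Triton's own tooling used for measurement.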
Specifics of the external work
We are looking for an interested person to thoroughly evaluate and compare deployment options on specialized hardware like Nvidia Jetson, providing performance examples on some of our given solutions.

You will be employed by Heraeus for the duration of this work. You will have access to infrastructure and data and will work with experts from the areas of production, data science, and digitalization, as well as with various executives and stakeholders. If you apply for this position, please provide us with a short motivation letter, your CV, and your current reference.
Knowledge of Python, NumPy, OpenCV, Docker, and Linux/Ubuntu. Ideal: basic deep learning knowledge, e.g. PyTorch, ONNX, and GPU hardware.