Ruslan Abdulin

Embedded Software Engineer with experience developing safety-critical software in real-time, multi-threaded, and partitioned environments for IoT devices, avionics, wireless communication systems, and robotics.

On-Board Computer for a drone, SAE International competition 2026

Close-up of the drone's circuitry with the PCB installed
PCB electric schematic
CSUN AERO 2026 drone
On-board computer connected to the corresponding peripherals
Close-up of the PCB with the computer

The onboard computer communicates over UART with any flight controller that supports the MAVLink protocol so that a dynamic payload can be picked up promptly upon landing. The computer combines a state-machine design with an RTOS to ensure the pick-up procedure runs with correct timing and can be repeated without rebooting. The procedure includes sending a variety of infrared commands, lowering the gate until it touches the ground, detecting that the payload is inside, raising the gate with the payload, and signaling to the drone that it is time to take off.
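The state-machine approach can be sketched as a pure transition function that an RTOS task drives from events. The state and event names below are illustrative assumptions, not the competition code:

```c
#include <assert.h>

/* Hypothetical pick-up states and events -- illustrative names only. */
typedef enum {
    ST_IDLE, ST_LOWER_GATE, ST_WAIT_PAYLOAD, ST_RAISE_GATE, ST_SIGNAL_TAKEOFF
} pickup_state_t;

typedef enum {
    EV_LANDED, EV_GATE_ON_GROUND, EV_PAYLOAD_INSIDE, EV_GATE_UP, EV_TAKEOFF_SENT
} pickup_event_t;

/* Pure transition function: easy to unit-test off-target. The RTOS task
 * simply loops: block on an event queue, apply the transition, act on it. */
pickup_state_t pickup_next(pickup_state_t s, pickup_event_t e)
{
    switch (s) {
    case ST_IDLE:           return e == EV_LANDED         ? ST_LOWER_GATE     : s;
    case ST_LOWER_GATE:     return e == EV_GATE_ON_GROUND ? ST_WAIT_PAYLOAD   : s;
    case ST_WAIT_PAYLOAD:   return e == EV_PAYLOAD_INSIDE ? ST_RAISE_GATE     : s;
    case ST_RAISE_GATE:     return e == EV_GATE_UP        ? ST_SIGNAL_TAKEOFF : s;
    case ST_SIGNAL_TAKEOFF: return e == EV_TAKEOFF_SENT   ? ST_IDLE           : s;
    }
    return s;
}
```

Returning to ST_IDLE after take-off is what allows the mission to repeat without a reboot.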

The computer sits on a custom soldered circuit. The circuit (a PCB prototype) has gone through bring-up and hardware-in-the-loop testing. Additionally, I debugged the code over a JTAG connection using OpenOCD with GDB. The circuitry provides reliable connections between the peripherals and the computer, limits the travel of the pick-up mechanism using a DC motor encoder, and debounces GPIO inputs with an RC-based low-pass filter.
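The RC debounce filter's corner frequency follows the standard first-order low-pass formula f_c = 1 / (2πRC). The component values in the check below are illustrative, not the board's actual BOM:

```c
#include <math.h>

#define RC_PI 3.14159265358979323846

/* Cutoff frequency of a first-order RC low-pass filter: f_c = 1/(2*pi*R*C).
 * Frequencies well above f_c (switch-bounce ringing) are attenuated before
 * the GPIO input samples the line. */
double rc_cutoff_hz(double r_ohm, double c_farad)
{
    return 1.0 / (2.0 * RC_PI * r_ohm * c_farad);
}
```

For example, an assumed 10 kΩ / 100 nF pair gives a cutoff near 159 Hz, slow enough to absorb millisecond-scale contact bounce.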

Autonomous Dynamic Payload, SAE International competition 2026

On-board computer connected to the corresponding peripherals
Close-up of the PCB with the computer
Close-up of the PCB with the computer

The dynamic payload is responsible for autonomously finding a path to the pick-up mechanism located on the bottom of the drone. The payload uses a contrast-based line-detection sensor, DC motors, a camera that actively searches for reflective tape wrapped around the pick-up mechanism, and a flashlight that provides consistent lighting and therefore increases the reliability of the computer vision algorithm.

A PCB was specifically designed to integrate the peripherals with the MCU (ESP32S3).
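The line-following side of the payload can be sketched as a proportional steering rule: the two DC motor duty cycles are offset from a base speed by the line-position error. The gains and PWM range below are assumptions for illustration:

```c
/* Clamp a computed duty cycle into the 8-bit PWM range. */
int clamp_pwm(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

/* Proportional steering: error is the line's offset from center in [-1, 1]
 * (from the contrast sensor), base is the cruising duty cycle, kp the
 * steering gain. A positive error speeds up the left wheel and slows the
 * right, turning the payload back toward the line. */
void line_follow_pwm(double error, int base, double kp, int *left, int *right)
{
    *left  = clamp_pwm((int)(base + kp * error));
    *right = clamp_pwm((int)(base - kp * error));
}
```

On target, the two outputs would feed the ESP32S3's LEDC PWM channels driving the motor driver.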

Internship, AI-powered device for people with dementia

The prototype
ESP32 camera has confirmed proximity to the BLE beacon and is transmitting its output to the local server

At Guardinova LLC, I was responsible for integrating the ESP32S3 MCU with an OV3660 camera to stream its output over WiFi (connected as a station) to an HTTP server whenever the microcontroller is close to a BLE beacon (proximity is estimated from RSSI using NimBLE).

  • The code is not available for public disclosure

STM32 driver for radio communication

Two STM32F407-Disc1 boards communicating with each other over radio modules
External debugger connected to one of the boards via the SWD ports

Following the official nRF24L01+ documentation, I implemented a fully configurable driver for TX and RX radio communication with the STM32F407 Discovery board. The setup requires the HAL library and an SPI connection between the board and the radio module. The driver was debugged with an external SWD debugger via OpenOCD and GDB.

The driver features dynamic payload sizing, addressing, CRC, auto-acknowledgement, and auto-retransmission with a configurable number of retries when no ACK is received.
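The configurable retry behavior maps directly onto the nRF24L01+ SETUP_RETR register described in the datasheet, which packs the auto-retransmit delay and count into one byte. A minimal helper for that packing (the function name is my own):

```c
#include <stdint.h>

/* nRF24L01+ SETUP_RETR register (address 0x04): ARD in bits 7:4 selects the
 * auto-retransmit delay in 250 us steps (value+1 steps), ARC in bits 3:0
 * selects the retransmit count, 0..15 (0 disables auto-retransmission). */
#define NRF24_REG_SETUP_RETR 0x04u

uint8_t nrf24_setup_retr(uint8_t delay_steps, uint8_t count)
{
    return (uint8_t)(((delay_steps & 0x0Fu) << 4) | (count & 0x0Fu));
}
```

The resulting byte is written over SPI with the W_REGISTER command; for example, the maximum delay and retry count encode as 0x5F for delay step 5 and 15 retries.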

Bare-metal GPIO and SPI drivers development for STM32

In accordance with the official documentation for the STM32F407-Disc1 board, I implemented and documented bare-metal GPIO and SPI drivers. The drivers include a set of macros that fully aligns with the memory map, as well as bit positions and values for each register.

The drivers provide both polling and ISR-based APIs with callback functions.

STM32 driver to interface LCD with I2C adapter

This configurable driver for STM32 boards enables both polling and ISR-based data transfers to a liquid crystal display through an I2C adapter, which reduces the total number of pins required.
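Such adapters typically put an 8-bit I2C expander (e.g. a PCF8574) in front of the HD44780 controller, so each 4-bit transfer is one packed byte on the bus. The pin mapping below is the common backpack wiring, assumed here for illustration; a particular board may differ:

```c
#include <stdint.h>

/* Assumed PCF8574 backpack wiring: P0=RS, P1=RW, P2=EN, P3=backlight,
 * P4..P7 = LCD data lines D4..D7. The driver sends each byte as two such
 * nibbles, pulsing EN high then low for each one. */
#define LCD_RS 0x01u
#define LCD_EN 0x04u
#define LCD_BL 0x08u

/* Pack one 4-bit nibble plus control bits into the expander's output byte. */
uint8_t lcd_pack_nibble(uint8_t nibble, int rs, int backlight)
{
    uint8_t b = (uint8_t)((nibble & 0x0Fu) << 4);
    if (rs)        b |= LCD_RS;   /* 1 = data register, 0 = command */
    if (backlight) b |= LCD_BL;
    return b;
}
```

This is what lets a 4-bit parallel display run over just the two I2C lines.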

Computer Vision-based obstacle avoidance

At CSUN Advanced System Labs, I implemented an object detection algorithm that computes object vertex coordinates relative to the camera’s origin. The vertices are derived using an approximation based on combined convex hulls. The convex hulls are obtained from a Luxonis OAK-D depth camera connected to a Linux-based NVIDIA Jetson board for processing. The vertices are passed to a pathfinding algorithm, which determines the next camera positions to enable real-time obstacle avoidance.
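The hull step itself can be sketched with the classic monotone-chain algorithm in 2-D. This is a host-side sketch only, standing in for the project's actual pipeline built on OAK-D depth data:

```c
#include <stddef.h>

typedef struct { double x, y; } pt;

/* Cross product of (a - o) and (b - o): positive for a left turn. */
double cross(pt o, pt a, pt b)
{
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

/* Andrew's monotone chain: given points sorted lexicographically by (x, y),
 * writes the convex hull vertices (counter-clockwise) to out and returns
 * their count. out must hold at least n + 1 points. */
size_t convex_hull(const pt *p, size_t n, pt *out)
{
    if (n < 3) { for (size_t i = 0; i < n; i++) out[i] = p[i]; return n; }
    size_t k = 0;
    for (size_t i = 0; i < n; i++) {                 /* lower hull */
        while (k >= 2 && cross(out[k-2], out[k-1], p[i]) <= 0) k--;
        out[k++] = p[i];
    }
    for (size_t i = n - 1, t = k + 1; i-- > 0; ) {   /* upper hull */
        while (k >= t && cross(out[k-2], out[k-1], p[i]) <= 0) k--;
        out[k++] = p[i];
    }
    return k - 1;  /* last point duplicates the first */
}
```

Interior points are discarded, so a cloud of depth samples reduces to the few hull vertices that the pathfinding stage actually needs.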

Additionally, I was responsible for camera calibration using a checkerboard pattern.

FDIA detection and localization using Machine Learning algorithms

At California State University, Northridge, I work as a Research Assistant with Professor Rasoul Narimani. I have drafted three papers and designed multiple supervised ML models: an attention-enhanced GCN for shortest-path finding, and an ARMAConv-based encoder-only Transformer for cyberattack detection and localization in power networks.