
FallGuard is an innovative project designed to detect falls in a privacy-respecting and efficient way using Edge AI technology. The system is built on the STM32N6570-DK development board paired with the MB1854B camera, enabling intelligent local processing without sending private images to the cloud—ensuring maximum privacy for the user.


Why Edge AI?

Edge AI brings the power of artificial intelligence directly to embedded devices like the STM32N6570-DK. By processing data locally—on the edge—FallGuard eliminates the need to send sensitive video streams over the internet. This not only protects user privacy but also reduces latency, increases reliability, and minimizes network usage. These advantages make Edge AI the perfect choice for real-time, privacy-sensitive applications such as fall detection.

Who is FallGuard for?

FallGuard is particularly useful in settings where monitoring for falls is critical, such as:

  • Elderly individuals living alone

  • Hospitals and nursing homes

  • Rehabilitation centers

  • Smart home environments for assisted living

By offering real-time fall detection with automatic notifications, FallGuard can provide peace of mind to caregivers, enhance safety for vulnerable individuals, and support timely medical intervention when needed.

Detection Approaches

The project demonstrates and evaluates two distinct AI-based approaches to fall detection:

  1. Object Detection

    • Uses a custom-trained model based on ST SSD MobileNet V1

    • Detects the presence and position of a person, and infers falls from training on visual examples of what a fall looks like

| Class name | Precision % | Recall % | AP % |
|:-----------|:-----------:|:--------:|:----:|
| person     | 100         | 4.9      | 9.8  |
| fall       | 93.8        | 20.4     | 35.2 |

```
Averages over classes %:
------------------------
Mean precision: 96.9
Mean recall:    12.7
Mean AP (mAP):  22.5
```

```
[INFO] : Establishing a connection to STM32Cube.AI Developer Cloud to launch the model benchmark on STM32 target...
[INFO] : Successfully connected!
[INFO] : Starting the model benchmark on target STM32N6570-DK
[INFO] : Total RAM : 1643.23 (KiB)
[INFO] : RAM Activations : 1641.5 (KiB)
[INFO] : RAM Runtime : 1.73 (KiB)
[INFO] : Total Flash : 2197.51 (KiB)
[INFO] : Flash Weights : 1640.2 (KiB)
[INFO] : Estimated Flash Code : 557.31 (KiB)
[INFO] : MACCs : 0.0 (M)
[INFO] : Internal RAM usage : 1641.5 (KiB)
[INFO] : External RAM usage : 0.0 (KiB)
[INFO] : Number of cycles : 17.074 (M)
[INFO] : Inference Time : 21.34 (ms)
```



  2. Pose Estimation

    • Analyzes keypoint positions (e.g., nose, eyes, ears)

    • Calculates motion vectors to detect unnatural movements associated with falling

```c
/* Fall detection with 10-frame history */
if (initialized_frames >= FRAME_HISTORY)
{
  int hist_idx = (frame_index + FRAME_HISTORY - 10) % FRAME_HISTORY;

  for (int i = 0; i < info->nb_detect; i++)
  {
    mpe_pp_keyPoints_t curr_nose = info->detects[i].pKeyPoints[0];
    mpe_pp_keyPoints_t hist_nose = nose_history[hist_idx][i];

    if (curr_nose.conf > AI_MPE_YOLOV8_PP_CONF_THRESHOLD &&
        hist_nose.conf > AI_MPE_YOLOV8_PP_CONF_THRESHOLD)
    {
      /* Convert keypoint coordinates to screen coordinates */
      int x_curr, y_curr, x_hist, y_hist;
      convert_point(curr_nose.x, curr_nose.y, &x_curr, &y_curr);
      convert_point(hist_nose.x, hist_nose.y, &x_hist, &y_hist);

      /* Calculate motion vector between current and historical position */
      float dx = x_curr - x_hist;
      float dy = y_curr - y_hist;
      float length = sqrtf(dx * dx + dy * dy);

      /* Display vector length above the detection box */
      int box_x0 = (int)(lcd_bg_area.XSize * (info->detects[i].x_center - info->detects[i].width / 2));
      int box_y0 = (int)(lcd_bg_area.YSize * (info->detects[i].y_center - info->detects[i].height / 2));
      UTIL_LCDEx_PrintfAt(box_x0, box_y0 - 20, LEFT_MODE, "%.0fpx", length);

      /* Fall detection (downward movement > 80 px over 10 frames) */
      if (dy > 80)
      {
        UTIL_LCDEx_PrintfAt(0, 10, LEFT_MODE, "Fall detected!");
        printf("Fall detected!\n\r");

        /* Prepare the message for the MQTT client thread */
        data.nb_detect = nb_rois;
        data.timestamp = GetRtcEpoch();
        data.state_detect = 13; /* Fall detected! */

        /* Send to MQTT thread (only on state change) */
        if (prev_state_detect != 13)
        {
          tx_queue_send(&measurement_queue, &data, TX_NO_WAIT);
          prev_state_detect = 13;
        }
      }
    }
  }
}
```




By comparing both techniques, the project aims to identify the most effective and efficient method for real-world deployment.


Additional Features

  • MQTT Connectivity: Sends fall alerts and status updates over the onboard network

  • Custom IoT Sensor Service: Manages sensors and sensor data

  • Automation with Node-RED:

    • Sends notifications via Pushbullet and Slack

    • Logs data to InfluxDB

    • Visualizes activity through Grafana dashboards

  • HLK-LD2410 Sensor Integration:

    • Detects human presence

    • Switches on lighting when it is dark and a person is present, ensuring safety and allowing the camera to track people in the room


Open Source

FallGuard is open-source and available on GitHub: