Compare commits

...

8 Commits

Author SHA1 Message Date
Nicolas Mowen
e6cbc93703
More stationary cleanup (#20229)
* Always return false for active objects

* Cleanup
2025-09-26 07:23:29 -06:00
GaryHuang-ASUS
b8b07ee6e1
[Init] Initial commit for Synaptics SL1680 NPU (#19680)
* [Init] Initial commit for Synaptics SL1680 NPU

* Add a rough detector, tested with a yolov8 tflite model.

* [Feat] Add dependencies installation in docker build

- Add runtime library and wheels installation in main/Dockerfile
- Add model.synap(default model, transfer from mobilenet_224full80) in docker/synap1680

* [Update] Remove dependencies installation from main Dockerfile

- remove deps installation from Dockerfile
- add dependencies installation and split wheels, deps stage in synap1680 Dockerfile

* Refactor synap detector to more closely match other implementations

* [Update] Add model path configuration check

* [Update] update ModelType to ssd

* [Update] Remove unused script

- install_deps.sh is already executed in the deps download stage
- Dockerfile.toolchain is for testing to extract runtime libraries from Synaptics toolchain

* [Update] update Synaptics SL1680 setup description

* [Update] remove install_synap1680

- The deps download and installation already exist in the synap1680 stage

* [Fix] update document content

* [Update] Update detector from synap1680 to synaptics

This update makes the Synaptics SL-series NPU detector more general.

- Fix bug where the detector's `os` module was not imported
- Update detector type `synap1680` to `synaptics`
- Update document description `SL1680` to `Synaptics` only
- Update docker build content `synap1680` to `synaptics`

* [Fix] Update configuration document

* Update docs/docs/configuration/object_detectors.md

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* [Update] Update document content and detector default layout

- Update object_detectors document
- Update detector's default layout
- Update default model name

* [Update] Update object detector document content

* [Fix] Fix InputTensorEnum not defined error

- import InputTensorEnum from detector_config

* [Update] Update detector script coding format

* [Update] Update synaptics detector coding format

* [Update] Add synaptics ci workflow

* [Update] update synaptics runtime libs download path

- Fork the Synaptics Astra SDK repo and host the runtime lib package there
- Frigate team can update this download path later

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-09-26 07:07:12 -05:00
Nicolas Mowen
082867447b
Stationary bug fixes (#20225)
* Correctly only enable for car

* Fix limiting stationary objects history
2025-09-26 07:03:59 -05:00
Nicolas Mowen
8b293449f9
Improve review summary (#20216)
* Add debug logging for review summaries report

* Improve debug logging

* Improve review report prompt

* Cleanup

* Add date to report
2025-09-25 21:05:22 -05:00
Nicolas Mowen
2f209b2cf4
Implement stationary car classifier to improve parked car management (#20206)
* Implement stationary car classifier to base stationary state on visual changes and not just bounding box stability

* Cleanup

* Fix mypy

* Move to new file and add config to disable if needed

* Cleanup

* Undo
2025-09-25 10:18:45 -05:00
Nicolas Mowen
9a22404015
Use devcontainer build to run tests (#20212)
* Use devcontainer build to run tests

* Make ignored github changes more restrictive
2025-09-25 09:59:18 -05:00
Nicolas Mowen
2c4a043dbb
Update go2rtc to 1.9.10 (#20202) 2025-09-25 06:15:04 -05:00
Nicolas Mowen
b23355da53
Update apple silicon docs (#20204) 2025-09-25 06:12:35 -05:00
24 changed files with 714 additions and 125 deletions

View File

@@ -173,6 +173,31 @@ jobs:
           set: |
             rk.tags=${{ steps.setup.outputs.image-name }}-rk
             *.cache-from=type=gha
+  synaptics_build:
+    runs-on: ubuntu-22.04-arm
+    name: Synaptics Build
+    needs:
+      - arm64_build
+    steps:
+      - name: Check out code
+        uses: actions/checkout@v5
+        with:
+          persist-credentials: false
+      - name: Set up QEMU and Buildx
+        id: setup
+        uses: ./.github/actions/setup
+        with:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+      - name: Build and push Synaptics build
+        uses: docker/bake-action@v6
+        with:
+          source: .
+          push: true
+          targets: synaptics
+          files: docker/synaptics/synaptics.hcl
+          set: |
+            synaptics.tags=${{ steps.setup.outputs.image-name }}-synaptics
+            *.cache-from=type=gha
   # The majority of users running arm64 are rpi users, so the rpi
   # build should be the primary arm64 image
   assemble_default_build:

View File

@@ -4,38 +4,14 @@ on:
   pull_request:
     paths-ignore:
       - "docs/**"
-      - ".github/**"
+      - ".github/*.yml"
+      - ".github/DISCUSSION_TEMPLATE/**"
+      - ".github/ISSUE_TEMPLATE/**"

 env:
   DEFAULT_PYTHON: 3.11

 jobs:
-  build_devcontainer:
-    runs-on: ubuntu-latest
-    name: Build Devcontainer
-    # The Dockerfile contains features that requires buildkit, and since the
-    # devcontainer cli uses docker-compose to build the image, the only way to
-    # ensure docker-compose uses buildkit is to explicitly enable it.
-    env:
-      DOCKER_BUILDKIT: "1"
-    steps:
-      - uses: actions/checkout@v5
-        with:
-          persist-credentials: false
-      - uses: actions/setup-node@master
-        with:
-          node-version: 20.x
-      - name: Install devcontainer cli
-        run: npm install --global @devcontainers/cli
-      - name: Build devcontainer
-        run: devcontainer build --workspace-folder .
-      # It would be nice to also test the following commands, but for some
-      # reason they don't work even though in VS Code devcontainer works.
-      # - name: Start devcontainer
-      #   run: devcontainer up --workspace-folder .
-      # - name: Run devcontainer scripts
-      #   run: devcontainer run-user-commands --workspace-folder .
   web_lint:
     name: Web - Lint
     runs-on: ubuntu-latest
@@ -102,13 +78,18 @@ jobs:
       - uses: actions/checkout@v5
         with:
           persist-credentials: false
-      - name: Set up QEMU
-        uses: docker/setup-qemu-action@v3
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-      - name: Build
-        run: make debug
-      - name: Run mypy
-        run: docker run --rm --entrypoint=python3 frigate:latest -u -m mypy --config-file frigate/mypy.ini frigate
-      - name: Run tests
-        run: docker run --rm --entrypoint=python3 frigate:latest -u -m unittest
+      - uses: actions/setup-node@master
+        with:
+          node-version: 20.x
+      - name: Install devcontainer cli
+        run: npm install --global @devcontainers/cli
+      - name: Build devcontainer
+        env:
+          DOCKER_BUILDKIT: "1"
+        run: devcontainer build --workspace-folder .
+      - name: Start devcontainer
+        run: devcontainer up --workspace-folder .
+      - name: Run mypy in devcontainer
+        run: devcontainer exec --workspace-folder . bash -lc "python3 -u -m mypy --config-file frigate/mypy.ini frigate"
+      - name: Run unit tests in devcontainer
+        run: devcontainer exec --workspace-folder . bash -lc "python3 -u -m unittest"

View File

@@ -55,7 +55,7 @@ RUN --mount=type=tmpfs,target=/tmp --mount=type=tmpfs,target=/var/cache/apt \
 FROM scratch AS go2rtc
 ARG TARGETARCH
 WORKDIR /rootfs/usr/local/go2rtc/bin
-ADD --link --chmod=755 "https://github.com/AlexxIT/go2rtc/releases/download/v1.9.9/go2rtc_linux_${TARGETARCH}" go2rtc
+ADD --link --chmod=755 "https://github.com/AlexxIT/go2rtc/releases/download/v1.9.10/go2rtc_linux_${TARGETARCH}" go2rtc

 FROM wget AS tempio
 ARG TARGETARCH

View File

@@ -0,0 +1,28 @@
# syntax=docker/dockerfile:1.6
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
# Globally set pip break-system-packages option to avoid having to specify it every time
ARG PIP_BREAK_SYSTEM_PACKAGES=1
FROM wheels AS synap1680-wheels
ARG TARGETARCH
# Install dependencies
RUN wget -qO- "https://github.com/GaryHuang-ASUS/synaptics_astra_sdk/releases/download/v1.5.0/Synaptics-SL1680-v1.5.0-rt.tar" | tar -C / -xzf -
RUN wget -P /wheels/ "https://github.com/synaptics-synap/synap-python/releases/download/v0.0.4-preview/synap_python-0.0.4-cp311-cp311-manylinux_2_35_aarch64.whl"
FROM deps AS synap1680-deps
ARG TARGETARCH
ARG PIP_BREAK_SYSTEM_PACKAGES
RUN --mount=type=bind,from=synap1680-wheels,source=/wheels,target=/deps/synap-wheels \
pip3 install --no-deps -U /deps/synap-wheels/*.whl
WORKDIR /opt/frigate/
COPY --from=rootfs / /
COPY --from=synap1680-wheels /rootfs/usr/local/lib/*.so /usr/lib
ADD https://raw.githubusercontent.com/synaptics-astra/synap-release/v1.5.0/models/dolphin/object_detection/coco/model/mobilenet224_full80/model.synap /synaptics/mobilenet.synap

View File

@@ -0,0 +1,27 @@
target wheels {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64"]
target = "wheels"
}
target deps {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64"]
target = "deps"
}
target rootfs {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64"]
target = "rootfs"
}
target synaptics {
dockerfile = "docker/synaptics/Dockerfile"
contexts = {
wheels = "target:wheels",
deps = "target:deps",
rootfs = "target:rootfs"
}
platforms = ["linux/arm64"]
}

View File

@@ -0,0 +1,15 @@
BOARDS += synaptics
local-synaptics: version
docker buildx bake --file=docker/synaptics/synaptics.hcl synaptics \
--set synaptics.tags=frigate:latest-synaptics \
--load
build-synaptics: version
docker buildx bake --file=docker/synaptics/synaptics.hcl synaptics \
--set synaptics.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-synaptics
push-synaptics: build-synaptics
docker buildx bake --file=docker/synaptics/synaptics.hcl synaptics \
--set synaptics.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-synaptics \
--push

View File

@@ -177,9 +177,11 @@ listen [::]:5000 ipv6only=off;

 By default, Frigate runs at the root path (`/`). However, some setups require running Frigate under a custom path prefix (e.g. `/frigate`), especially when Frigate is located behind a reverse proxy that requires path-based routing.

 ### Set Base Path via HTTP Header

 The preferred way to configure the base path is through the `X-Ingress-Path` HTTP header, which needs to be set to the desired base path in an upstream reverse proxy.

 For example, in Nginx:

 ```
 location /frigate {
     proxy_set_header X-Ingress-Path /frigate;
@@ -188,9 +190,11 @@ location /frigate {
 ```

 ### Set Base Path via Environment Variable

 When it is not feasible to set the base path via an HTTP header, it can also be set via the `FRIGATE_BASE_PATH` environment variable in the Docker Compose file.

 For example:

 ```
 services:
   frigate:
@@ -200,6 +204,7 @@ services:
 ```

 This can be used for example to access Frigate via a Tailscale agent (https), by simply forwarding all requests to the base path (http):

 ```
 tailscale serve --https=443 --bg --set-path /frigate http://localhost:5000/frigate
 ```

@@ -218,7 +223,7 @@ To do this:

 ### Custom go2rtc version

-Frigate currently includes go2rtc v1.9.9, there may be certain cases where you want to run a different version of go2rtc.
+Frigate currently includes go2rtc v1.9.10, there may be certain cases where you want to run a different version of go2rtc.
 To do this:
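The numbered steps themselves fall outside this hunk. As a rough sketch only (assuming, per the full document, that Frigate will use a `go2rtc` binary found in its config directory), a Docker Compose override might look like:

```yaml
services:
  frigate:
    volumes:
      # hypothetical host path; the downloaded binary must be renamed to
      # `go2rtc` and marked executable before mounting it into /config
      - ./go2rtc:/config/go2rtc
```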

View File

@@ -231,7 +231,7 @@ go2rtc:
       - rtspx://192.168.1.1:7441/abcdefghijk
 ```

-[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#source-rtsp)
+[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-rtsp)

 In the Unifi 2.0 update Unifi Protect Cameras had a change in audio sample rate which causes issues for ffmpeg. The input rate needs to be set for record if used directly with unifi protect.

View File

@@ -427,3 +427,29 @@ cameras:
 ```
 :::

+## Synaptics
+
+Hardware accelerated video de-/encoding is supported on Synaptics SL-series SoCs.
+
+### Prerequisites
+
+Make sure to follow the [Synaptics specific installation instructions](/frigate/installation#synaptics).
+
+### Configuration
+
+Add the following FFmpeg preset to your `config.yml` to enable hardware video processing:
+
+```yaml
+ffmpeg:
+  hwaccel_args: -c:v h264_v4l2m2m
+  input_args: preset-rtsp-restream
+  output_args:
+    record: preset-record-generic-audio-aac
+```
+
+:::warning
+Make sure that your SoC supports hardware acceleration for your input stream and that your input stream is h264-encoded. For example, if your camera streams with h264 encoding, your SoC must be able to decode and encode it. If you are unsure whether your SoC meets the requirements, take a look at the datasheet.
+:::

View File

@@ -43,6 +43,10 @@ Frigate supports multiple different detectors that work on different types of ha
 - [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs.

+**Synaptics**
+
+- [Synaptics](#synaptics): synap models can run on Synaptics devices (e.g. Astra Machina) with included NPUs.
+
 **For Testing**

 - [CPU Detector (not recommended for actual use)](#cpu-detector-not-recommended): Use a CPU to run tflite model, this is not recommended and in most cases OpenVINO can be used in CPU mode with better results.
@@ -449,12 +453,13 @@ The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv
 :::

-After placing the downloaded onnx model in your config folder, you can use the following configuration:
+When Frigate is started with the following config, it will connect to the detector client and transfer the model automatically:

 ```yaml
 detectors:
-  onnx:
-    type: onnx
+  apple-silicon:
+    type: zmq
+    endpoint: tcp://host.docker.internal:5555

 model:
   model_type: yolo-generic
@@ -1048,6 +1053,41 @@ model:
   height: 320 # MUST match the chosen model i.e yolov7-320 -> 320 yolov4-416 -> 416
 ```

+## Synaptics
+
+Hardware accelerated object detection is supported on the following SoCs:
+
+- SL1680
+
+This implementation uses the [Synaptics model conversion](https://synaptics-synap.github.io/doc/v/latest/docs/manual/introduction.html#offline-model-conversion) toolchain (v3.1.0) and is based on SDK `v1.5.0`.
+
+See the [installation docs](../frigate/installation.md#synaptics) for information on configuring the SL-series NPU hardware.
+
+### Configuration
+
+When configuring the Synap detector, you must specify the model as a local **path**.
+
+#### SSD Mobilenet
+
+A synap model is provided in the container at `/synaptics/mobilenet.synap` and is used by this detector type by default. The model comes from the [synap-release repo on GitHub](https://github.com/synaptics-astra/synap-release/tree/v1.5.0/models/dolphin/object_detection/coco/model/mobilenet224_full80).
+
+Use the model configuration shown below when using the synaptics detector with the default synap model:
+
+```yaml
+detectors: # required
+  synap_npu: # required
+    type: synaptics # required
+
+model: # required
+  path: /synaptics/mobilenet.synap # required
+  width: 224 # required
+  height: 224 # required
+  tensor_format: nhwc # optional; defaults to nhwc (required if you change the model)
+  labelmap_path: /labelmap/coco-80.txt # required
+```
+
 ## Rockchip platform

 Hardware accelerated object detection is supported on the following SoCs:

View File

@@ -287,6 +287,9 @@ detect:
   max_disappeared: 25
   # Optional: Configuration for stationary object tracking
   stationary:
+    # Optional: Stationary classifier that uses visual characteristics to determine if an object
+    # is stationary even if the box changes enough to be considered motion (default: shown below).
+    classifier: True
     # Optional: Frequency for confirming stationary objects (default: same as threshold)
     # When set to 1, object detection will run to confirm the object still exists on every frame.
     # If set to 10, object detection will run to confirm the object still exists on every 10th frame.
@@ -697,7 +700,7 @@ audio_transcription:
   language: en

 # Optional: Restream configuration
-# Uses https://github.com/AlexxIT/go2rtc (v1.9.9)
+# Uses https://github.com/AlexxIT/go2rtc (v1.9.10)
 # NOTE: The default go2rtc API port (1984) must be used,
 # changing this port for the integrated go2rtc instance is not supported.
 go2rtc:

View File

@@ -7,7 +7,7 @@ title: Restream

 Frigate can restream your video feed as an RTSP feed for other applications such as Home Assistant to utilize it at `rtsp://<frigate_host>:8554/<camera_name>`. Port 8554 must be open. [This allows you to use a video feed for detection in Frigate and Home Assistant live view at the same time without having to make two separate connections to the camera](#reduce-connections-to-camera). The video feed is copied from the original video feed directly to avoid re-encoding. This feed does not include any annotation by Frigate.

-Frigate uses [go2rtc](https://github.com/AlexxIT/go2rtc/tree/v1.9.9) to provide its restream and MSE/WebRTC capabilities. The go2rtc config is hosted at the `go2rtc` key in the config, see [go2rtc docs](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#configuration) for more advanced configurations and features.
+Frigate uses [go2rtc](https://github.com/AlexxIT/go2rtc/tree/v1.9.10) to provide its restream and MSE/WebRTC capabilities. The go2rtc config is hosted at the `go2rtc` key in the config, see [go2rtc docs](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#configuration) for more advanced configurations and features.
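As a minimal sketch of this pattern (camera name, credentials, and IP are placeholders), a single camera connection can feed both the restream and detection:

```yaml
go2rtc:
  streams:
    back_yard:
      - rtsp://user:password@192.168.1.10:554/stream1

cameras:
  back_yard:
    ffmpeg:
      inputs:
        # consume the local restream instead of opening a second camera connection
        - path: rtsp://127.0.0.1:8554/back_yard
          input_args: preset-rtsp-restream
          roles:
            - detect
            - record
```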
:::note
@@ -156,7 +156,7 @@ See [this comment](https://github.com/AlexxIT/go2rtc/issues/1217#issuecomment-22

 ## Advanced Restream Configurations

-The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:
+The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:
NOTE: The output will need to be passed with two curly braces `{{output}}`
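The example itself falls outside this hunk; a minimal sketch, adapted from the go2rtc README's exec example (the file path and stream name are placeholders):

```yaml
go2rtc:
  streams:
    # run a custom ffmpeg pipeline (here, looping a local file) into the restream
    file_stream: exec:ffmpeg -hide_banner -re -stream_loop -1 -i /media/sample.mp4 -c copy -rtsp_transport tcp -f rtsp {{output}}
```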

View File

@@ -95,8 +95,21 @@ Frigate supports multiple different detectors that work on different types of ha
   - Runs best with tiny or small size models
   - Runs efficiently on low power hardware

+**Synaptics**
+
+- [Synaptics](#synaptics): synap models can run on Synaptics devices (e.g. Astra Machina) with included NPUs to provide efficient object detection.
+
 :::

+### Synaptics
+
+The default model for the **Synaptics** detector is **ssd mobilenet**.
+
+| Name          | Synaptics SL1680 Inference Time |
+| ------------- | ------------------------------- |
+| ssd mobilenet | ~ 25 ms                         |
+| yolov5m       | ~ 118 ms                        |
+
 ### Hailo-8

 Frigate supports both the Hailo-8 and Hailo-8L AI Acceleration Modules on compatible hardware platforms—including the Raspberry Pi 5 with the PCIe hat from the AI kit. The Hailo detector integration in Frigate automatically identifies your hardware type and selects the appropriate default model when a custom model isn't provided.

View File

@@ -256,6 +256,37 @@ or add these options to your `docker run` command:

 Next, you should configure [hardware object detection](/configuration/object_detectors#rockchip-platform) and [hardware video processing](/configuration/hardware_acceleration_video#rockchip-platform).

+### Synaptics
+
+- SL1680
+
+#### Setup
+
+Follow Frigate's default installation instructions, but use a docker image with the `-synaptics` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-synaptics`.
+
+Next, you need to grant docker permissions to access your hardware:
+
+- During the configuration process, you should run docker in privileged mode to avoid any errors due to insufficient permissions. To do so, add `privileged: true` to your `docker-compose.yml` file or the `--privileged` flag to your docker run command.
+
+```yaml
+devices:
+  - /dev/synap
+  - /dev/video0
+  - /dev/video1
+```
+
+or add these options to your `docker run` command:
+
+```
+--device /dev/synap \
+--device /dev/video0 \
+--device /dev/video1
+```
+
+#### Configuration
+
+Next, you should configure [hardware object detection](/configuration/object_detectors#synaptics) and [hardware video processing](/configuration/hardware_acceleration_video#synaptics).
+
 ## Docker

 Running through Docker with Docker Compose is the recommended install method.

View File

@@ -13,7 +13,7 @@ Use of the bundled go2rtc is optional. You can still configure FFmpeg to connect

 # Setup a go2rtc stream

-First, you will want to configure go2rtc to connect to your camera stream by adding the stream you want to use for live view in your Frigate config file. Avoid changing any other parts of your config at this step. Note that go2rtc supports [many different stream types](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#module-streams), not just rtsp.
+First, you will want to configure go2rtc to connect to your camera stream by adding the stream you want to use for live view in your Frigate config file. Avoid changing any other parts of your config at this step. Note that go2rtc supports [many different stream types](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#module-streams), not just rtsp.

 :::tip

@@ -49,8 +49,8 @@ After adding this to the config, restart Frigate and try to watch the live strea

 - Check Video Codec:
   - If the camera stream works in go2rtc but not in your browser, the video codec might be unsupported.
-  - If using H265, switch to H264. Refer to [video codec compatibility](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#codecs-madness) in go2rtc documentation.
-  - If unable to switch from H265 to H264, or if the stream format is different (e.g., MJPEG), re-encode the video using [FFmpeg parameters](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#source-ffmpeg). It supports rotating and resizing video feeds and hardware acceleration. Keep in mind that transcoding video from one format to another is a resource intensive task and you may be better off using the built-in jsmpeg view.
+  - If using H265, switch to H264. Refer to [video codec compatibility](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#codecs-madness) in go2rtc documentation.
+  - If unable to switch from H265 to H264, or if the stream format is different (e.g., MJPEG), re-encode the video using [FFmpeg parameters](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-ffmpeg). It supports rotating and resizing video feeds and hardware acceleration. Keep in mind that transcoding video from one format to another is a resource intensive task and you may be better off using the built-in jsmpeg view.

 ```yaml
 go2rtc:
   streams:

View File

@@ -5,14 +5,14 @@ import frigateHttpApiSidebar from "./docs/integrations/api/sidebar";
 const sidebars: SidebarsConfig = {
   docs: {
     Frigate: [
-      'frigate/index',
-      'frigate/hardware',
-      'frigate/planning_setup',
-      'frigate/installation',
-      'frigate/updating',
-      'frigate/camera_setup',
-      'frigate/video_pipeline',
-      'frigate/glossary',
+      "frigate/index",
+      "frigate/hardware",
+      "frigate/planning_setup",
+      "frigate/installation",
+      "frigate/updating",
+      "frigate/camera_setup",
+      "frigate/video_pipeline",
+      "frigate/glossary",
     ],
     Guides: [
       "guides/getting_started",
@@ -28,7 +28,7 @@ const sidebars: SidebarsConfig = {
       {
         type: "link",
         label: "Go2RTC Configuration Reference",
-        href: "https://github.com/AlexxIT/go2rtc/tree/v1.9.9#configuration",
+        href: "https://github.com/AlexxIT/go2rtc/tree/v1.9.10#configuration",
       } as PropSidebarItemLink,
     ],
     Detectors: [
@@ -119,11 +119,11 @@ const sidebars: SidebarsConfig = {
       "configuration/metrics",
       "integrations/third_party_extensions",
     ],
-    'Frigate+': [
-      'plus/index',
-      'plus/annotating',
-      'plus/first_model',
-      'plus/faq',
-    ],
+    "Frigate+": [
+      "plus/index",
+      "plus/annotating",
+      "plus/first_model",
+      "plus/faq",
+    ],
     Troubleshooting: [
       "troubleshooting/faqs",

View File

@@ -29,6 +29,10 @@ class StationaryConfig(FrigateBaseModel):
         default_factory=StationaryMaxFramesConfig,
         title="Max frames for stationary objects.",
     )
+    classifier: bool = Field(
+        default=True,
+        title="Enable visual classifier for determining if objects with jittery bounding boxes are stationary.",
+    )


 class DetectConfig(FrigateBaseModel):

View File

@@ -93,7 +93,7 @@ class ReviewDescriptionProcessor(PostProcessorApi):
         if camera_config.review.genai.debug_save_thumbnails:
             id = data["after"]["id"]
-            Path(os.path.join(CLIPS_DIR, f"genai-requests/{id}")).mkdir(
+            Path(os.path.join(CLIPS_DIR, "genai-requests", f"{id}")).mkdir(
                 parents=True, exist_ok=True
             )
             shutil.copy(
@@ -124,6 +124,9 @@ class ReviewDescriptionProcessor(PostProcessorApi):
         if topic == EmbeddingsRequestEnum.summarize_review.value:
             start_ts = request_data["start_ts"]
             end_ts = request_data["end_ts"]
+            logger.debug(
+                f"Found GenAI Review Summary request for {start_ts} to {end_ts}"
+            )
             items: list[dict[str, Any]] = [
                 r["data"]["metadata"]
                 for r in (
@@ -141,7 +144,7 @@ class ReviewDescriptionProcessor(PostProcessorApi):
             if len(items) == 0:
                 logger.debug("No review items with metadata found during time period")
-                return None
+                return "No activity was found during this time."

             important_items = list(
                 filter(
@@ -154,8 +157,16 @@ class ReviewDescriptionProcessor(PostProcessorApi):
             if not important_items:
                 return "No concerns were found during this time period."

+            if self.config.review.genai.debug_save_thumbnails:
+                Path(
+                    os.path.join(CLIPS_DIR, "genai-requests", f"{start_ts}-{end_ts}")
+                ).mkdir(parents=True, exist_ok=True)
+
             return self.genai_client.generate_review_summary(
-                start_ts, end_ts, important_items
+                start_ts,
+                end_ts,
+                important_items,
+                self.config.review.genai.debug_save_thumbnails,
             )
         else:
             return None

View File

@@ -19,3 +19,4 @@ class ReviewMetadata(BaseModel):
         default=None,
         description="Other concerns highlighted by the user that are observed.",
     )
+    time: str | None = Field(default=None, description="Time of activity.")

View File

@@ -0,0 +1,91 @@
import logging
import os
import numpy as np
from synap import Network
from synap.postprocessor import Detector
from synap.preprocessor import Preprocessor
from synap.types import Layout, Shape
from typing_extensions import Literal
from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detector_config import (
BaseDetectorConfig,
InputTensorEnum,
ModelTypeEnum,
)
logger = logging.getLogger(__name__)
DETECTOR_KEY = "synaptics"
class SynapDetectorConfig(BaseDetectorConfig):
type: Literal[DETECTOR_KEY]
class SynapDetector(DetectionApi):
type_key = DETECTOR_KEY
def __init__(self, detector_config: SynapDetectorConfig):
try:
_, ext = os.path.splitext(detector_config.model.path)
if ext and ext != ".synap":
raise ValueError("Model path config for Synap1680 is wrong.")
synap_network = Network(detector_config.model.path)
logger.info(f"Synap NPU loaded model: {detector_config.model.path}")
except ValueError as ve:
logger.error(f"Config to Synap1680 was Failed: {ve}")
raise
except Exception as e:
logger.error(f"Failed to init Synap NPU: {e}")
raise
self.width = detector_config.model.width
self.height = detector_config.model.height
self.model_type = detector_config.model.model_type
self.network = synap_network
self.network_input_details = self.network.inputs[0]
self.input_tensor_layout = detector_config.model.input_tensor
# Create Inference Engine
self.preprocessor = Preprocessor()
self.detector = Detector(score_threshold=0.4, iou_threshold=0.4)
def detect_raw(self, tensor_input: np.ndarray):
# This has currently only been tested with a pre-converted mobilenet80 .tflite -> .synap model
layout = Layout.nhwc # default layout
detections = np.zeros((20, 6), np.float32)
if self.input_tensor_layout == InputTensorEnum.nhwc:
layout = Layout.nhwc
postprocess_data = self.preprocessor.assign(
self.network.inputs, tensor_input, Shape(tensor_input.shape), layout
)
output_tensor_obj = self.network.predict()
output = self.detector.process(output_tensor_obj, postprocess_data)
if self.model_type == ModelTypeEnum.ssd:
for i, item in enumerate(output.items):
if i == 20:
break
bb = item.bounding_box
# Convert corner coordinates to normalized [0,1] range
x1 = bb.origin.x / self.width # Top-left X
y1 = bb.origin.y / self.height # Top-left Y
x2 = (bb.origin.x + bb.size.x) / self.width # Bottom-right X
y2 = (bb.origin.y + bb.size.y) / self.height # Bottom-right Y
detections[i] = [
item.class_index,
float(item.confidence),
y1,
x1,
y2,
x2,
]
else:
logger.error(f"Unsupported model type: {self.model_type}")
return detections

View File

@@ -313,6 +313,7 @@ class EmbeddingMaintainer(threading.Thread):
                 if resp is not None:
                     return resp

+            logger.error(f"No processor handled the topic {topic}")
             return None
         except Exception as e:
             logger.error(f"Unable to handle embeddings request {e}", exc_info=True)

View File

@@ -73,7 +73,7 @@ Your task is to provide a clear, security-focused description of the scene that:
 Facts come first, but identifying security risks is the primary goal.

 When forming your description:
-- Describe the time, people, and objects exactly as seen. Include any observable environmental changes (e.g., lighting changes triggered by activity).
+- Describe the people and objects exactly as seen. Include any observable environmental changes (e.g., lighting changes triggered by activity).
 - Time of day should **increase suspicion only when paired with unusual or security-relevant behaviors**. Do not raise the threat level for common residential activities (e.g., residents walking pets, retrieving mail, gardening, playing with pets, supervising children) even at unusual hours, unless other suspicious indicators are present.
 - Focus on behaviors that are uncharacteristic of innocent activity: loitering without clear purpose, avoiding cameras, inspecting vehicles/doors, changing behavior when lights activate, scanning surroundings without an apparent benign reason.
 - **Benign context override**: If scanning or looking around is clearly part of an innocent activity (such as playing with a dog, gardening, supervising children, or watching for a pet), do not treat it as suspicious.
@@ -99,7 +99,7 @@ Sequence details:
 **IMPORTANT:**
 - Values must be plain strings, floats, or integers: no nested objects, no extra commentary.
 {get_language_prompt()}
 """
         logger.debug(
             f"Sending {len(thumbnails)} images to create review description on {review_data['camera']}"
         )
@@ -135,6 +135,7 @@ Sequence details:
             if review_data["recognized_objects"]:
                 metadata.potential_threat_level = 0

+            metadata.time = review_data["start"]
             return metadata
         except Exception as e:
             # rarely LLMs can fail to follow directions on output format
@@ -146,34 +147,75 @@ Sequence details:
         return None

     def generate_review_summary(
-        self, start_ts: float, end_ts: float, segments: list[dict[str, Any]]
+        self,
+        start_ts: float,
+        end_ts: float,
+        segments: list[dict[str, Any]],
+        debug_save: bool,
     ) -> str | None:
         """Generate a summary of review item descriptions over a period of time."""
-        time_range = f"{datetime.datetime.fromtimestamp(start_ts).strftime('%I:%M %p')} to {datetime.datetime.fromtimestamp(end_ts).strftime('%I:%M %p')}"
+        time_range = f"{datetime.datetime.fromtimestamp(start_ts).strftime('%B %d, %Y at %I:%M %p')} to {datetime.datetime.fromtimestamp(end_ts).strftime('%B %d, %Y at %I:%M %p')}"
         timeline_summary_prompt = f"""
-You are a security officer. Time range: {time_range}.
+You are a security officer.
+Time range: {time_range}.

 Input: JSON list with "scene", "confidence", "potential_threat_level" (1-2), "other_concerns".

-Write a report:
-Security Summary - {time_range}
-[One-sentence overview of activity]
-[Chronological bullet list of events with timestamps if in scene]
-[Final threat assessment]
+Task: Write a concise, human-presentable security report in markdown format.

-Rules:
-- List events in order.
-- Highlight potential_threat_level 1 with exact times.
-- Note any of the additional concerns which are present.
-- Note unusual activity even if not threats.
-- If no threats: "Final assessment: Only normal activity observed during this period."
-- No commentary, questions, or recommendations.
-- Output only the report.
+Rules for the report:
+
+- Title & overview
+  - Start with:
+    # Security Summary - {time_range}
+  - Write a 1-2 sentence situational overview capturing the general pattern of the period.
+
+- Event details
+  - Present events in chronological order as a bullet list.
+  - **If multiple events occur within the same minute or overlapping time range, COMBINE them into a single bullet.**
+  - Summarize the distinct activities as sub-points under the shared timestamp.
+  - If no timestamp is given, preserve order but label as "Time not specified".
+  - Use bold timestamps for clarity.
+  - Group bullets under subheadings when multiple events fall into the same category (e.g., Vehicle Activity, Porch Activity, Unusual Behavior).
+
+- Threat levels
+  - Always show (threat level: X) for each event.
+  - If multiple events at the same time share the same threat level, only state it once.
+
+- Final assessment
+  - End with a Final Assessment section.
+  - If all events are threat level 1 with no escalation:
+    Final assessment: Only normal residential activity observed during this period.
+  - If threat level 2+ events are present, clearly summarize them as potential concerns requiring review.
+
+- Conciseness
+  - Do not repeat benign clothing/appearance details unless they distinguish individuals.
+  - Summarize similar routine events instead of restating full scene descriptions.
 """

         for item in segments:
             timeline_summary_prompt += f"\n{item}"

-        return self._send(timeline_summary_prompt, [])
+        if debug_save:
+            with open(
+                os.path.join(
+                    CLIPS_DIR, "genai-requests", f"{start_ts}-{end_ts}", "prompt.txt"
+                ),
+                "w",
+            ) as f:
+                f.write(timeline_summary_prompt)
+
+        response = self._send(timeline_summary_prompt, [])
+
+        if debug_save and response:
+            with open(
+                os.path.join(
+                    CLIPS_DIR, "genai-requests", f"{start_ts}-{end_ts}", "response.txt"
+                ),
+                "w",
+            ) as f:
+                f.write(response)
+
+        return response

     def generate_object_description(
         self,
View File

@@ -1,7 +1,7 @@
 import logging
 import random
 import string
-from typing import Any, Sequence
+from typing import Any, Sequence, cast

 import cv2
 import numpy as np
@@ -17,6 +17,7 @@ from frigate.camera import PTZMetrics
 from frigate.config import CameraConfig
 from frigate.ptz.autotrack import PtzMotionEstimator
 from frigate.track import ObjectTracker
+from frigate.track.stationary_classifier import StationaryMotionClassifier
 from frigate.util.image import (
     SharedMemoryFrameManager,
     get_histogram,
@@ -119,6 +120,7 @@ class NorfairTracker(ObjectTracker):
         self.ptz_motion_estimator: PtzMotionEstimator | None = None
         self.camera_name = config.name
         self.track_id_map: dict[str, str] = {}
+        self.stationary_classifier = StationaryMotionClassifier()

         # Define tracker configurations for static camera
         self.object_type_configs = {
@@ -321,23 +323,14 @@ class NorfairTracker(ObjectTracker):
     # tracks the current position of the object based on the last N bounding boxes
     # returns False if the object has moved outside its previous position
-    def update_position(self, id: str, box: list[int], stationary: bool) -> bool:
-        xmin, ymin, xmax, ymax = box
-        position = self.positions[id]
-        self.stationary_box_history[id].append(box)
-
-        if len(self.stationary_box_history[id]) > MAX_STATIONARY_HISTORY:
-            self.stationary_box_history[id] = self.stationary_box_history[id][
-                -MAX_STATIONARY_HISTORY:
-            ]
-
-        avg_iou = intersection_over_union(
-            box, average_boxes(self.stationary_box_history[id])
-        )
-
-        # object has minimal or zero iou
-        # assume object is active
-        if avg_iou < THRESHOLD_KNOWN_ACTIVE_IOU:
+    def update_position(
+        self,
+        id: str,
+        box: list[int],
+        stationary: bool,
+        yuv_frame: np.ndarray | None,
+    ) -> bool:
+        def reset_position(xmin: int, ymin: int, xmax: int, ymax: int) -> None:
             self.positions[id] = {
                 "xmins": [xmin],
                 "ymins": [ymin],
@@ -348,13 +341,48 @@ class NorfairTracker(ObjectTracker):
                 "xmax": xmax,
                 "ymax": ymax,
             }
-            return False

+        xmin, ymin, xmax, ymax = box
+        position = self.positions[id]
+        self.stationary_box_history[id].append(box)
+
+        if len(self.stationary_box_history[id]) > MAX_STATIONARY_HISTORY:
+            self.stationary_box_history[id] = self.stationary_box_history[id][
+                -MAX_STATIONARY_HISTORY:
+            ]
+
+        avg_box = average_boxes(self.stationary_box_history[id])
+        avg_iou = intersection_over_union(box, avg_box)
+        median_box = median_of_boxes(self.stationary_box_history[id])
+
+        # Establish anchor early when stationary and stable
+        if stationary and yuv_frame is not None:
+            history = self.stationary_box_history[id]
+            if id not in self.stationary_classifier.anchor_crops and len(history) >= 5:
+                stability_iou = intersection_over_union(avg_box, median_box)
+                if stability_iou >= 0.7:
+                    self.stationary_classifier.ensure_anchor(
+                        id, yuv_frame, cast(tuple[int, int, int, int], median_box)
+                    )
+
+        # object has minimal or zero iou
+        # assume object is active
+        if avg_iou < THRESHOLD_KNOWN_ACTIVE_IOU:
+            if stationary and yuv_frame is not None:
+                if not self.stationary_classifier.evaluate(
+                    id, yuv_frame, cast(tuple[int, int, int, int], tuple(box))
+                ):
+                    reset_position(xmin, ymin, xmax, ymax)
+                    return False
+            else:
+                reset_position(xmin, ymin, xmax, ymax)
+                return False
+
         threshold = (
             THRESHOLD_STATIONARY_CHECK_IOU if stationary else THRESHOLD_ACTIVE_CHECK_IOU
         )

-        # object has iou below threshold, check median to reduce outliers
+        # object has iou below threshold, check median and optionally crop similarity
         if avg_iou < threshold:
             median_iou = intersection_over_union(
                 (
@@ -363,27 +391,26 @@ class NorfairTracker(ObjectTracker):
                     position["xmax"],
                     position["ymax"],
                 ),
-                median_of_boxes(self.stationary_box_history[id]),
+                median_box,
             )

             # if the median iou drops below the threshold
             # assume object is no longer stationary
             if median_iou < threshold:
-                self.positions[id] = {
-                    "xmins": [xmin],
-                    "ymins": [ymin],
-                    "xmaxs": [xmax],
-                    "ymaxs": [ymax],
-                    "xmin": xmin,
-                    "ymin": ymin,
-                    "xmax": xmax,
-                    "ymax": ymax,
-                }
-                return False
+                # check with the classifier before flipping to active, if we have a YUV frame
+                if stationary and yuv_frame is not None:
+                    if not self.stationary_classifier.evaluate(
+                        id, yuv_frame, cast(tuple[int, int, int, int], tuple(box))
+                    ):
+                        reset_position(xmin, ymin, xmax, ymax)
+                        return False
+                else:
+                    reset_position(xmin, ymin, xmax, ymax)
+                    return False

         # if there are more than 5 and less than 10 entries for the position, add the bounding box
         # and recompute the position box
-        if 5 <= len(position["xmins"]) < 10:
+        if len(position["xmins"]) < 10:
             position["xmins"].append(xmin)
             position["ymins"].append(ymin)
             position["xmaxs"].append(xmax)
@@ -416,7 +443,12 @@ class NorfairTracker(ObjectTracker):

         return False

-    def update(self, track_id: str, obj: dict[str, Any]) -> None:
+    def update(
+        self,
+        track_id: str,
+        obj: dict[str, Any],
+        yuv_frame: np.ndarray | None,
+    ) -> None:
         id = self.track_id_map[track_id]
         self.disappeared[id] = 0
         stationary = (
@@ -424,7 +456,7 @@ class NorfairTracker(ObjectTracker):
             >= self.detect_config.stationary.threshold
         )
         # update the motionless count if the object has not moved to a new position
-        if self.update_position(id, obj["box"], stationary):
+        if self.update_position(id, obj["box"], stationary, yuv_frame):
             self.tracked_objects[id]["motionless_count"] += 1
             if self.is_expired(id):
                 self.deregister(id, track_id)
@@ -440,6 +472,7 @@ class NorfairTracker(ObjectTracker):
                 self.tracked_objects[id]["position_changes"] += 1
             self.tracked_objects[id]["motionless_count"] = 0
             self.stationary_box_history[id] = []
+            self.stationary_classifier.on_active(id)

         self.tracked_objects[id].update(obj)
@@ -467,6 +500,15 @@ class NorfairTracker(ObjectTracker):
     ) -> None:
         # Group detections by object type
         detections_by_type: dict[str, list[Detection]] = {}
+
+        yuv_frame: np.ndarray | None = None
+        if self.ptz_metrics.autotracker_enabled.value or (
+            self.detect_config.stationary.classifier
+            and any(obj[0] == "car" for obj in detections)
+        ):
+            yuv_frame = self.frame_manager.get(
+                frame_name, self.camera_config.frame_shape_yuv
+            )

         for obj in detections:
             label = obj[0]
             if label not in detections_by_type:
@@ -481,9 +523,6 @@ class NorfairTracker(ObjectTracker):
             embedding = None

             if self.ptz_metrics.autotracker_enabled.value:
-                yuv_frame = self.frame_manager.get(
-                    frame_name, self.camera_config.frame_shape_yuv
-                )
                 embedding = get_histogram(
                     yuv_frame, obj[2][0], obj[2][1], obj[2][2], obj[2][3]
                 )
@@ -575,7 +614,11 @@ class NorfairTracker(ObjectTracker):
                 self.tracked_objects[id]["estimate"] = new_obj["estimate"]
             # else update it
             else:
-                self.update(str(t.global_id), new_obj)
+                self.update(
+                    str(t.global_id),
+                    new_obj,
+                    yuv_frame if new_obj["label"] == "car" else None,
+                )

         # clear expired tracks
         expired_ids = [k for k in self.track_id_map.keys() if k not in active_ids]

View File

@@ -0,0 +1,202 @@
"""Tools for determining if an object is stationary."""
import logging
from typing import Any, cast
import cv2
import numpy as np
from scipy.ndimage import gaussian_filter
logger = logging.getLogger(__name__)
THRESHOLD_KNOWN_ACTIVE_IOU = 0.2
THRESHOLD_STATIONARY_CHECK_IOU = 0.6
THRESHOLD_ACTIVE_CHECK_IOU = 0.9
MAX_STATIONARY_HISTORY = 10
class StationaryMotionClassifier:
"""Fallback classifier to prevent false flips from stationary to active.
Uses appearance consistency on a fixed spatial region (historical median box)
to detect actual movement, ignoring bounding box detection variations.
"""
CROP_SIZE = 96
NCC_KEEP_THRESHOLD = 0.90 # High correlation = keep stationary
NCC_ACTIVE_THRESHOLD = 0.85 # Low correlation = consider active
SHIFT_KEEP_THRESHOLD = 0.02 # Small shift = keep stationary
SHIFT_ACTIVE_THRESHOLD = 0.04 # Large shift = consider active
DRIFT_ACTIVE_THRESHOLD = 0.12 # Cumulative drift over 5 frames
CHANGED_FRAMES_TO_FLIP = 2
def __init__(self) -> None:
self.anchor_crops: dict[str, np.ndarray] = {}
self.anchor_boxes: dict[str, tuple[int, int, int, int]] = {}
self.changed_counts: dict[str, int] = {}
self.shift_histories: dict[str, list[float]] = {}
# Pre-compute Hanning window for phase correlation
hann = np.hanning(self.CROP_SIZE).astype(np.float64)
self._hann2d = np.outer(hann, hann)
def reset(self, id: str) -> None:
logger.debug("StationaryMotionClassifier.reset: id=%s", id)
if id in self.anchor_crops:
del self.anchor_crops[id]
if id in self.anchor_boxes:
del self.anchor_boxes[id]
self.changed_counts[id] = 0
self.shift_histories[id] = []
def _extract_y_crop(
self, yuv_frame: np.ndarray, box: tuple[int, int, int, int]
) -> np.ndarray:
"""Extract and normalize Y-plane crop from bounding box."""
y_height = yuv_frame.shape[0] // 3 * 2
width = yuv_frame.shape[1]
x1 = max(0, min(width - 1, box[0]))
y1 = max(0, min(y_height - 1, box[1]))
x2 = max(0, min(width - 1, box[2]))
y2 = max(0, min(y_height - 1, box[3]))
if x2 <= x1:
x2 = min(width - 1, x1 + 1)
if y2 <= y1:
y2 = min(y_height - 1, y1 + 1)
# Extract Y-plane crop, resize, and blur
y_plane = yuv_frame[0:y_height, 0:width]
crop = y_plane[y1:y2, x1:x2]
crop_resized = cv2.resize(
crop, (self.CROP_SIZE, self.CROP_SIZE), interpolation=cv2.INTER_AREA
)
result = cast(np.ndarray[Any, Any], gaussian_filter(crop_resized, sigma=0.5))
logger.debug(
"_extract_y_crop: box=%s clamped=(%d,%d,%d,%d) crop_shape=%s",
box,
x1,
y1,
x2,
y2,
crop.shape if "crop" in locals() else None,
)
return result
def ensure_anchor(
self, id: str, yuv_frame: np.ndarray, median_box: tuple[int, int, int, int]
) -> None:
"""Initialize anchor crop from stable median box when object becomes stationary."""
if id not in self.anchor_crops:
self.anchor_boxes[id] = median_box
self.anchor_crops[id] = self._extract_y_crop(yuv_frame, median_box)
self.changed_counts[id] = 0
self.shift_histories[id] = []
logger.debug(
"ensure_anchor: initialized id=%s median_box=%s crop_shape=%s",
id,
median_box,
self.anchor_crops[id].shape,
)
def on_active(self, id: str) -> None:
"""Reset state when object becomes active to allow re-anchoring."""
logger.debug("on_active: id=%s became active; resetting state", id)
self.reset(id)
def evaluate(
self, id: str, yuv_frame: np.ndarray, current_box: tuple[int, int, int, int]
) -> bool:
"""Return True to keep stationary, False to flip to active.
Compares the same spatial region (historical median box) across frames
to detect actual movement, ignoring bounding box variations.
"""
if id not in self.anchor_crops or id not in self.anchor_boxes:
logger.debug("evaluate: id=%s has no anchor; default keep stationary", id)
return True
# Compare same spatial region across frames
anchor_box = self.anchor_boxes[id]
anchor_crop = self.anchor_crops[id]
curr_crop = self._extract_y_crop(yuv_frame, anchor_box)
# Compute appearance and motion metrics
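# NCC (TM_CCOEFF_NORMED on equal-size crops) collapses to a single score in
# [-1, 1]; values near 1.0 mean the anchor region looks visually unchanged.
# phaseCorrelate estimates the translation between the two crops; the Hanning
# window suppresses FFT edge artifacts, and dividing by CROP_SIZE normalizes
# the shift so the thresholds are independent of crop resolution.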
ncc = cv2.matchTemplate(curr_crop, anchor_crop, cv2.TM_CCOEFF_NORMED)[0, 0]
a64 = anchor_crop.astype(np.float64) * self._hann2d
c64 = curr_crop.astype(np.float64) * self._hann2d
(shift_x, shift_y), _ = cv2.phaseCorrelate(a64, c64)
shift_norm = float(np.hypot(shift_x, shift_y)) / float(self.CROP_SIZE)
logger.debug(
"evaluate: id=%s metrics ncc=%.4f shift_norm=%.4f (shift_x=%.3f, shift_y=%.3f)",
id,
float(ncc),
shift_norm,
float(shift_x),
float(shift_y),
)
# Update rolling shift history
history = self.shift_histories.get(id, [])
history.append(shift_norm)
if len(history) > 5:
history = history[-5:]
self.shift_histories[id] = history
drift_sum = float(sum(history))
logger.debug(
"evaluate: id=%s history_len=%d last_shift=%.4f drift_sum=%.4f",
id,
len(history),
history[-1] if history else -1.0,
drift_sum,
)
# Early exit for clear stationary case
if ncc >= self.NCC_KEEP_THRESHOLD and shift_norm < self.SHIFT_KEEP_THRESHOLD:
self.changed_counts[id] = 0
logger.debug(
"evaluate: id=%s early-stationary keep=True (ncc>=%.2f and shift<%.2f)",
id,
self.NCC_KEEP_THRESHOLD,
self.SHIFT_KEEP_THRESHOLD,
)
return True
# Check for movement indicators
movement_detected = (
ncc < self.NCC_ACTIVE_THRESHOLD
or shift_norm >= self.SHIFT_ACTIVE_THRESHOLD
or drift_sum >= self.DRIFT_ACTIVE_THRESHOLD
)
if movement_detected:
cnt = self.changed_counts.get(id, 0) + 1
self.changed_counts[id] = cnt
if (
cnt >= self.CHANGED_FRAMES_TO_FLIP
or drift_sum >= self.DRIFT_ACTIVE_THRESHOLD
):
logger.debug(
"evaluate: id=%s flip_to_active=True cnt=%d drift_sum=%.4f thresholds(changed>=%d drift>=%.2f)",
id,
cnt,
drift_sum,
self.CHANGED_FRAMES_TO_FLIP,
self.DRIFT_ACTIVE_THRESHOLD,
)
return False
logger.debug(
"evaluate: id=%s movement_detected cnt=%d keep_until_cnt>=%d",
id,
cnt,
self.CHANGED_FRAMES_TO_FLIP,
)
else:
self.changed_counts[id] = 0
logger.debug("evaluate: id=%s no_movement keep=True", id)
return True