Compare commits

...

24 Commits

Author SHA1 Message Date
Nicolas Mowen
5ff7a47ba9
Unify list of objects under dedicated section (#20684)
* Unify list of objects under dedicated section

* Use helper function
2025-10-26 16:37:57 -05:00
Josh Hawkins
5715ed62ad
More detail pane tweaks (#20681)
* More detail pane tweaks

* remove unneeded check

* add ability to submit frames to frigate+

* rename object lifecycle to tracking details

* add object mask creation to lifecycle item menu

* change tracking details icon
2025-10-26 13:12:20 -05:00
Nicolas Mowen
43706eb48d
Add button to view exports when exported (#20682) 2025-10-26 13:11:48 -05:00
Nicolas Mowen
190925375b
Classification fixes (#20677)
* Don't run classification on stationary objects and set a maximum number of classifications

* Fix layout of classification selection
2025-10-26 08:41:18 -05:00
Nicolas Mowen
094a0a6e05
Add ability to change source of images for review descriptions (#20676)
* Add ability to change source of images for review descriptions

* Undo
2025-10-26 08:40:38 -05:00
Josh Hawkins
840d567d22
UI tweaks (#20675)
* spacing tweaks and add link to explore for plate

* clear selected objects when changing cameras

* plate link and spacing in object lifecycle

* set tabindex to prevent tooltip from showing on reopen

* show month and day in object lifecycle timestamp
2025-10-26 07:27:07 -05:00
Josh Hawkins
2c480b9a89
Fix History layout for mobile portrait cameras (#20669) 2025-10-25 19:44:06 -05:00
Nicolas Mowen
1fb21a4dac
Classification improvements (#20665)
* Don't classify objects that are ended

* Use weighted scoring for object classification

* Implement state verification
2025-10-25 16:15:49 -06:00
Josh Hawkins
63042b9c08
Review stream tweaks (#20662)
* tweak api to fetch multiple timelines

* support multiple selected objects in context

* rework context provider

* use toggle in detail stream

* use toggle in menu

* plot multiple object tracks

* verified icon, recognized plate, and clicking tweaks

* add plate to object lifecycle

* close menu before opening frigate+ dialog

* clean up

* normal text case for tooltip

* capitalization

* use flexbox for recording view
2025-10-25 16:15:36 -06:00
Nicolas Mowen
0a6b9f98ed
Various fixes (#20666)
* Remove nvidia pyindex

* Improve prompt
2025-10-25 16:40:04 -05:00
Blake Blackshear
32875fb4cc Merge remote-tracking branch 'origin/master' into dev 2025-10-25 11:16:09 +00:00
Nicolas Mowen
c5fe354552
Improve Reolink Camera Documentation (#20605)
* Improve Reolink Camera Documentation

* Update Reolink configuration link in live.md
2025-10-21 16:20:41 -06:00
Josh Hawkins
5dc8a85f2f
Update Azure OpenAI genai docs (#20549)
* Update azure openai genai docs

* tweak url
2025-10-18 06:44:26 -06:00
Nicolas Mowen
0302db1c43
Fix model exports (#20540) 2025-10-17 07:16:30 -05:00
Nicolas Mowen
a4764563a5
Fix YOLOv9 export script (#20514) 2025-10-16 07:56:37 -05:00
Josh Hawkins
942a61ddfb
version bump in docs (#20501) 2025-10-15 05:53:31 -06:00
Nicolas Mowen
4d582062fb
Ensure that a user must provide an image in an expected location (#20491)
* Ensure that a user must provide an image in an expected location

* Use const
2025-10-14 16:29:20 -05:00
Nicolas Mowen
e0a8445bac
Improve rf-detr export (#20485) 2025-10-14 08:32:44 -05:00
Josh Hawkins
2a271c0f5e
Update GenAI docs for Gemini model deprecation (#20462) 2025-10-13 10:00:21 -06:00
Nicolas Mowen
925bf78811
Update review topic description (#20445) 2025-10-12 07:28:08 -05:00
Sean Kelly
59102794e8
Add keyboard shortcut for switching to previous label (#20426)
* Add keyboard shortcut for switching to previous label

* Update docs/docs/plus/annotating.md

Co-authored-by: Blake Blackshear <blake.blackshear@gmail.com>

---------

Co-authored-by: Blake Blackshear <blake.blackshear@gmail.com>
2025-10-11 10:43:41 -06:00
mpking828
20e5e3bdc0
Update camera_specific.md to fix 2 way audio example for Reolink (#20343)
Update camera_specific.md to fix 2 way audio example for Reolink
2025-10-03 08:49:51 -06:00
AmirHossein_Omidi
b94ebda9e5
Update license_plate_recognition.md (#20306)
* Update license_plate_recognition.md

Add PaddleOCR description for license plate recognition in Frigate docs

* Update docs/docs/configuration/license_plate_recognition.md

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/docs/configuration/license_plate_recognition.md

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-10-01 08:18:47 -05:00
Nicolas Mowen
8cdaef307a
Update face rec docs (#20256)
* Update face rec docs

* clarify

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-09-28 11:31:59 -05:00
47 changed files with 1233 additions and 651 deletions

View File

@ -1,2 +1 @@
scikit-build == 0.18.*
nvidia-pyindex

View File

@ -164,13 +164,35 @@ According to [this discussion](https://github.com/blakeblackshear/frigate/issues
Cameras connected via a Reolink NVR can be connected with the http stream; use `channel[0..15]` in the stream URL for the additional channels.
The main stream can also be set up via RTSP, but this isn't always reliable on all hardware versions. The example configuration works with the oldest HW version RLN16-410 device with multiple types of cameras.
<details>
<summary>Example Config</summary>
:::tip
Reolink's latest cameras support two way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability; a secondary RTSP stream can be added that will be used for the two way audio only.
NOTE: The RTSP stream cannot be prefixed with `ffmpeg:`, as go2rtc needs to handle the stream to support two way audio.
Ensure HTTP is enabled in the camera's advanced network settings. To use two way talk with Frigate, see the [Live view documentation](/configuration/live#two-way-talk).
:::
```yaml
go2rtc:
streams:
# example for connecting to a standard Reolink camera
your_reolink_camera:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus"
your_reolink_camera_sub:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password"
# example for connecting to a Reolink camera that supports two way talk
your_reolink_camera_twt:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus"
- "rtsp://username:password@reolink_ip/Preview_01_sub"
your_reolink_camera_twt_sub:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password"
- "rtsp://username:password@reolink_ip/Preview_01_sub"
# example for connecting to a Reolink NVR
your_reolink_camera_via_nvr:
- "ffmpeg:http://reolink_nvr_ip/flv?port=1935&app=bcs&stream=channel3_main.bcs&user=username&password=password" # channel numbers are 0-15
- "ffmpeg:your_reolink_camera_via_nvr#audio=aac"
@ -201,22 +223,7 @@ cameras:
roles:
- detect
```
#### Reolink Doorbell
The Reolink doorbell supports two way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability; a secondary RTSP stream can be added that will be used for the two way audio only.
Ensure HTTP is enabled in the camera's advanced network settings. To use two way talk with Frigate, see the [Live view documentation](/configuration/live#two-way-talk).
```yaml
go2rtc:
streams:
your_reolink_doorbell:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus"
- rtsp://reolink_ip/Preview_01_sub
your_reolink_doorbell_sub:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password"
```
</details>
### Unifi Protect Cameras

View File

@ -161,6 +161,8 @@ Start with the [Usage](#usage) section and re-read the [Model Requirements](#mod
Accuracy will definitely improve with higher quality cameras / streams. It is important to look at the DORI (Detection Observation Recognition Identification) range of your camera, if that specification is posted. This specification explains the distance from the camera at which a person can be detected, observed, recognized, and identified. The identification range is the most relevant here, and the distance listed by the camera is the furthest at which face recognition will realistically work.
Some users have also noted that setting the stream in camera firmware to a constant bit rate (CBR) leads to better image clarity than with a variable bit rate (VBR).
### Why can't I bulk upload photos?
It is important to methodically add photos to the library; bulk importing photos (especially from a general photo library) will lead to over-fitting in that particular scenario and hurt recognition performance.

View File

@ -17,18 +17,17 @@ To use Generative AI, you must define a single provider at the global level of y
genai:
provider: gemini
api_key: "{FRIGATE_GEMINI_API_KEY}"
model: gemini-1.5-flash
model: gemini-2.0-flash
cameras:
front_camera:
objects:
genai:
enabled: True # <- enable GenAI for your front camera
use_snapshot: True
objects:
- person
required_zones:
- steps
enabled: True # <- enable GenAI for your front camera
use_snapshot: True
objects:
- person
required_zones:
- steps
indoor_camera:
objects:
genai:
@ -80,7 +79,7 @@ Google Gemini has a free tier allowing [15 queries per minute](https://ai.google
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini). At the time of writing, this includes `gemini-1.5-pro` and `gemini-1.5-flash`.
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).
### Get API Key
@ -97,7 +96,7 @@ To start using Gemini, you must first get an API key from [Google AI Studio](htt
genai:
provider: gemini
api_key: "{FRIGATE_GEMINI_API_KEY}"
model: gemini-1.5-flash
model: gemini-2.0-flash
```
:::note
@ -112,7 +111,7 @@ OpenAI does not have a free tier for their API. With the release of gpt-4o, pric
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models).
### Get API Key
@ -139,18 +138,19 @@ Microsoft offers several vision models through Azure OpenAI. A subscription is r
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).
### Create Resource and Get API Key
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key and resource URL, which must include the `api-version` parameter (see the example below). The model field is not required in your configuration as the model is part of the deployment name you chose when deploying the resource.
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below).
### Configuration
```yaml
genai:
provider: azure_openai
base_url: https://example-endpoint.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2023-03-15-preview
base_url: https://instance.cognitiveservices.azure.com/openai/responses?api-version=2025-04-01-preview
model: gpt-5-mini
api_key: "{FRIGATE_OPENAI_API_KEY}"
```
@ -196,10 +196,10 @@ genai:
model: llava
objects:
prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
object_prompts:
person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
object_prompts:
person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.

View File

@ -39,6 +39,26 @@ Each installation and even camera can have different parameters for what is cons
- Brief movement with legitimate items (bags, packages, tools, equipment) in appropriate zones is routine.
```
### Image Source
By default, review summaries use preview images (cached preview frames), which have a lower resolution but use fewer tokens per image. For better image quality and more detailed analysis, you can configure Frigate to extract frames directly from recordings at a higher resolution:
```yaml
review:
genai:
enabled: true
image_source: recordings # Options: "preview" (default) or "recordings"
```
When using `recordings`, frames are extracted at 480p resolution (480px height), providing better detail for the LLM while being mindful of context window size. This is particularly useful for scenarios where fine details matter, such as identifying license plates, reading text, or analyzing distant objects. Note that using recordings will:
- Provide higher quality images to the LLM (480p vs 180p preview images)
- Use more tokens per image (~200-300 tokens vs ~100 tokens for preview)
- Result in fewer frames being sent to stay within context limits (typically 6-12 frames vs 8-20 frames)
- Require that recordings are enabled for the camera
If recordings are not available for a given time period, the system will automatically fall back to using preview frames.
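The frame-count tradeoff above is essentially a token-budget calculation. A rough sketch of that arithmetic (the per-image token figures are the approximations quoted above; the `reserve` value and helper function are illustrative, not Frigate's actual logic):

```python
def approx_frame_budget(context_size: int, tokens_per_image: int, reserve: int = 2000) -> int:
    """Estimate how many frames fit in the model's context window."""
    # leave headroom for the prompt text and the model's response
    usable = context_size - reserve
    return max(1, usable // tokens_per_image)

# preview frames (~100 tokens each) vs recording frames (~250 tokens each)
preview_budget = approx_frame_budget(8000, 100)    # many low-resolution frames
recordings_budget = approx_frame_budget(8000, 250) # fewer, higher-quality frames
```

This is why switching to `recordings` typically cuts the number of frames sent roughly in half: the same context window has to absorb two to three times as many tokens per image.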
### Additional Concerns
Along with the concern of suspicious activity or immediate threat, you may have concerns such as animals in your garden or a gate being left open. These concerns can be configured so that the review summaries will make note of them if the activity requires additional review. For example:

View File

@ -30,8 +30,7 @@ In the default mode, Frigate's LPR needs to first detect a `car` or `motorcycle`
## Minimum System Requirements
License plate recognition works by running AI models locally on your system. The models are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required.
License plate recognition works by running AI models locally on your system. The YOLOv9 plate detector model and the OCR models ([PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)) are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required.
## Configuration
License plate recognition is disabled by default. Enable it in your config file:

View File

@ -174,7 +174,7 @@ For devices that support two way talk, Frigate can be configured to use the feat
- Ensure you access Frigate via https (may require [opening port 8971](/frigate/installation/#ports)).
- For the Home Assistant Frigate card, [follow the docs](http://card.camera/#/usage/2-way-audio) for the correct source.
To use the Reolink Doorbell with two way talk, you should use the [recommended Reolink configuration](/configuration/camera_specific#reolink-doorbell)
To use the Reolink Doorbell with two way talk, you should use the [recommended Reolink configuration](/configuration/camera_specific#reolink-cameras)
As a starting point to check compatibility for your camera, view the list of cameras supported for two-way talk on the [go2rtc repository](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#two-way-audio). For cameras in the category `ONVIF Profile T`, you can use the [ONVIF Conformant Products Database](https://www.onvif.org/conformant-products/)'s FeatureList to check for the presence of `AudioOutput`. A camera that supports `ONVIF Profile T` _usually_ supports this, but due to inconsistent support, a camera that explicitly lists this feature may still not work. If no entry for your camera exists on the database, it is recommended not to buy it or to consult with the manufacturer's support on the feature availability.

View File

@ -1455,7 +1455,7 @@ COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /dfine
RUN git clone https://github.com/Peterande/D-FINE.git .
RUN uv pip install --system -r requirements.txt
RUN uv pip install --system onnx onnxruntime onnxsim
RUN uv pip install --system onnx onnxruntime onnxsim onnxscript
# Create output directory and download checkpoint
RUN mkdir -p output
ARG MODEL_SIZE
@ -1479,9 +1479,9 @@ FROM python:3.11 AS build
RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /rfdetr
RUN uv pip install --system rfdetr onnx onnxruntime onnxsim onnx-graphsurgeon
RUN uv pip install --system rfdetr[onnxexport] torch==2.8.0 onnxscript
ARG MODEL_SIZE
RUN python3 -c "from rfdetr import RFDETR${MODEL_SIZE}; x = RFDETR${MODEL_SIZE}(resolution=320); x.export()"
RUN python3 -c "from rfdetr import RFDETR${MODEL_SIZE}; x = RFDETR${MODEL_SIZE}(resolution=320); x.export(simplify=True)"
FROM scratch
ARG MODEL_SIZE
COPY --from=build /rfdetr/output/inference_model.onnx /rfdetr-${MODEL_SIZE}.onnx
@ -1529,7 +1529,7 @@ COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /yolov9
ADD https://github.com/WongKinYiu/yolov9.git .
RUN uv pip install --system -r requirements.txt
RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier>=0.4.1
RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier>=0.4.1 onnxscript
ARG MODEL_SIZE
ARG IMG_SIZE
ADD https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-${MODEL_SIZE}-converted.pt yolov9-${MODEL_SIZE}.pt

View File

@ -429,6 +429,10 @@ review:
alerts: True
# Optional: Enable GenAI review summaries for detections (default: shown below)
detections: False
# Optional: Image source for GenAI (default: preview)
# Options: "preview" (uses cached preview frames at 180p) or "recordings" (extracts frames from recordings at 480p)
# Using "recordings" provides better image quality but uses ~2-3x more tokens per image (~200-300 vs ~100 tokens)
image_source: preview
# Optional: Additional concerns that the GenAI should make note of (default: None)
additional_concerns:
- Animals in the garden

View File

@ -5,7 +5,7 @@ title: Updating
# Updating Frigate
The current stable version of Frigate is **0.16.1**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.16.1).
The current stable version of Frigate is **0.16.2**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.16.2).
Keeping Frigate up to date ensures you benefit from the latest features, performance improvements, and bug fixes. The update process varies slightly depending on your installation method (Docker, Home Assistant Addon, etc.). Below are instructions for the most common setups.
@ -33,21 +33,21 @@ If you're running Frigate via Docker (recommended method), follow these steps:
2. **Update and Pull the Latest Image**:
- If using Docker Compose:
- Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.16.1` instead of `0.15.2`). For example:
- Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.16.2` instead of `0.15.2`). For example:
```yaml
services:
frigate:
image: ghcr.io/blakeblackshear/frigate:0.16.1
image: ghcr.io/blakeblackshear/frigate:0.16.2
```
- Then pull the image:
```bash
docker pull ghcr.io/blakeblackshear/frigate:0.16.1
docker pull ghcr.io/blakeblackshear/frigate:0.16.2
```
- **Note for `stable` Tag Users**: If your `docker-compose.yml` uses the `stable` tag (e.g., `ghcr.io/blakeblackshear/frigate:stable`), you don't need to update the tag manually. The `stable` tag always points to the latest stable release after pulling.
- If using `docker run`:
- Pull the image with the appropriate tag (e.g., `0.16.1`, `0.16.1-tensorrt`, or `stable`):
- Pull the image with the appropriate tag (e.g., `0.16.2`, `0.16.2-tensorrt`, or `stable`):
```bash
docker pull ghcr.io/blakeblackshear/frigate:0.16.1
docker pull ghcr.io/blakeblackshear/frigate:0.16.2
```
3. **Start the Container**:

View File

@ -161,7 +161,14 @@ Message published for updates to tracked object metadata, for example:
### `frigate/reviews`
Message published for each changed review item. The first message is published when the `detection` or `alert` is initiated. When additional objects are detected or when a zone change occurs, it will publish an `update` message with the same id. When the review activity has ended, a final `end` message is published.
Message published for each changed review item. The first message is published when the `detection` or `alert` is initiated.
An `update` with the same ID will be published when:
- The severity changes from `detection` to `alert`
- Additional objects are detected
- An object is recognized via face, lpr, etc.
When the review activity has ended, a final `end` message is published.
```json
{

View File

@ -42,6 +42,7 @@ Misidentified objects should have a correct label added. For example, if a perso
| `w` | Add box |
| `d` | Toggle difficult |
| `s` | Switch to the next label |
| `Shift + s` | Switch to the previous label |
| `tab` | Select next largest box |
| `del` | Delete current box |
| `esc` | Deselect/Cancel |

View File

@ -696,7 +696,11 @@ def timeline(camera: str = "all", limit: int = 100, source_id: Optional[str] = N
clauses.append((Timeline.camera == camera))
if source_id:
clauses.append((Timeline.source_id == source_id))
source_ids = [sid.strip() for sid in source_id.split(",")]
if len(source_ids) == 1:
clauses.append((Timeline.source_id == source_ids[0]))
else:
clauses.append((Timeline.source_id.in_(source_ids)))
if len(clauses) == 0:
clauses.append((True))
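The comma-separated `source_id` handling above can be sketched in isolation (the helper name is illustrative, not part of Frigate's API):

```python
def parse_source_ids(source_id: str) -> list[str]:
    """Split a comma-separated query parameter into trimmed IDs."""
    return [sid.strip() for sid in source_id.split(",")]

# a single ID still yields a one-element list, so the caller can
# branch between an equality clause and an IN clause on len()
single = parse_source_ids("1730000000.1-abcd12")
multiple = parse_source_ids("1730000000.1-abcd12, 1730000100.2-ef3456")
```

Stripping whitespace around each ID keeps `?source_id=a, b` and `?source_id=a,b` equivalent from the client's perspective.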

View File

@ -9,6 +9,7 @@ from typing import List
import psutil
from fastapi import APIRouter, Depends, Request
from fastapi.responses import JSONResponse
from pathvalidate import sanitize_filepath
from peewee import DoesNotExist
from playhouse.shortcuts import model_to_dict
@ -26,7 +27,7 @@ from frigate.api.defs.response.export_response import (
)
from frigate.api.defs.response.generic_response import GenericResponse
from frigate.api.defs.tags import Tags
from frigate.const import EXPORT_DIR
from frigate.const import CLIPS_DIR, EXPORT_DIR
from frigate.models import Export, Previews, Recordings
from frigate.record.export import (
PlaybackFactorEnum,
@ -88,7 +89,14 @@ def export_recording(
playback_factor = body.playback
playback_source = body.source
friendly_name = body.name
existing_image = body.image_path
existing_image = sanitize_filepath(body.image_path) if body.image_path else None
# Ensure that existing_image is a valid path
if existing_image and not existing_image.startswith(CLIPS_DIR):
return JSONResponse(
content=({"success": False, "message": "Invalid image path"}),
status_code=400,
)
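The sanitize-then-prefix-check pattern used here can be sketched standalone (a minimal sketch; the `CLIPS_DIR` value and `os.path.normpath` stand in for Frigate's constant and the `pathvalidate` sanitizer):

```python
import os

CLIPS_DIR = "/media/frigate/clips"  # assumed value for illustration

def is_allowed_image_path(path: str) -> bool:
    """Reject any client-supplied path that escapes the clips directory."""
    # normalize first so ../ traversal cannot sneak past the prefix check
    normalized = os.path.normpath(path)
    return normalized.startswith(CLIPS_DIR + os.sep)
```

Checking the prefix only after normalization is what makes the guard meaningful; a raw `startswith` alone would accept `/media/frigate/clips/../../etc/passwd`.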
if playback_source == "recordings":
recordings_count = (

View File

@ -1,10 +1,18 @@
from enum import Enum
from typing import Optional, Union
from pydantic import Field, field_validator
from ..base import FrigateBaseModel
__all__ = ["ReviewConfig", "DetectionsConfig", "AlertsConfig"]
__all__ = ["ReviewConfig", "DetectionsConfig", "AlertsConfig", "ImageSourceEnum"]
class ImageSourceEnum(str, Enum):
"""Image source options for GenAI Review."""
preview = "preview"
recordings = "recordings"
DEFAULT_ALERT_OBJECTS = ["person", "car"]
@ -77,6 +85,10 @@ class GenAIReviewConfig(FrigateBaseModel):
)
alerts: bool = Field(default=True, title="Enable GenAI for alerts.")
detections: bool = Field(default=False, title="Enable GenAI for detections.")
image_source: ImageSourceEnum = Field(
default=ImageSourceEnum.preview,
title="Image source for review descriptions.",
)
additional_concerns: list[str] = Field(
default=[],
title="Additional concerns that GenAI should make note of on this camera.",

View File

@ -3,6 +3,7 @@
import copy
import datetime
import logging
import math
import os
import shutil
import threading
@ -10,16 +11,18 @@ from pathlib import Path
from typing import Any
import cv2
from peewee import DoesNotExist
from frigate.comms.embeddings_updater import EmbeddingsRequestEnum
from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import FrigateConfig
from frigate.config.camera.review import GenAIReviewConfig
from frigate.config.camera.review import GenAIReviewConfig, ImageSourceEnum
from frigate.const import CACHE_DIR, CLIPS_DIR, UPDATE_REVIEW_DESCRIPTION
from frigate.data_processing.types import PostProcessDataEnum
from frigate.genai import GenAIClient
from frigate.models import ReviewSegment
from frigate.models import Recordings, ReviewSegment
from frigate.util.builtin import EventsPerSecond, InferenceSpeed
from frigate.util.image import get_image_from_recording
from ..post.api import PostProcessorApi
from ..types import DataProcessorMetrics
@ -43,20 +46,35 @@ class ReviewDescriptionProcessor(PostProcessorApi):
self.review_descs_dps = EventsPerSecond()
self.review_descs_dps.start()
def calculate_frame_count(self) -> int:
"""Calculate optimal number of frames based on context size."""
# With our preview images (height of 180px) each image should be ~100 tokens per image
# We want to be conservative to not have too long of query times with too many images
def calculate_frame_count(
self, image_source: ImageSourceEnum = ImageSourceEnum.preview
) -> int:
"""Calculate optimal number of frames based on context size and image source."""
context_size = self.genai_client.get_context_size()
if context_size > 10000:
return 20
elif context_size > 6000:
return 16
elif context_size > 4000:
return 12
if image_source == ImageSourceEnum.recordings:
# With recordings at 480p resolution (480px height), each image uses ~200-300 tokens
# This is ~2-3x more than preview images, so we reduce frame count accordingly
# to avoid exceeding context limits and maintain reasonable inference times
if context_size > 10000:
return 12
elif context_size > 6000:
return 10
elif context_size > 4000:
return 8
else:
return 6
else:
return 8
# With preview images (180px height), each image uses ~100 tokens
# We can send more frames since they're lower resolution
if context_size > 10000:
return 20
elif context_size > 6000:
return 16
elif context_size > 4000:
return 12
else:
return 8
def process_data(self, data, data_type):
self.metrics.review_desc_dps.value = self.review_descs_dps.eps()
@ -88,36 +106,50 @@ class ReviewDescriptionProcessor(PostProcessorApi):
):
return
frames = self.get_cache_frames(
camera, final_data["start_time"], final_data["end_time"]
)
image_source = camera_config.review.genai.image_source
if not frames:
frames = [final_data["thumb_path"]]
thumbs = []
for idx, thumb_path in enumerate(frames):
thumb_data = cv2.imread(thumb_path)
ret, jpg = cv2.imencode(
".jpg", thumb_data, [int(cv2.IMWRITE_JPEG_QUALITY), 100]
if image_source == ImageSourceEnum.recordings:
thumbs = self.get_recording_frames(
camera,
final_data["start_time"],
final_data["end_time"],
height=480, # Use 480p for good balance between quality and token usage
)
if ret:
thumbs.append(jpg.tobytes())
if camera_config.review.genai.debug_save_thumbnails:
id = data["after"]["id"]
Path(os.path.join(CLIPS_DIR, "genai-requests", f"{id}")).mkdir(
if not thumbs:
# Fallback to preview frames if no recordings available
logger.warning(
f"No recording frames found for {camera}, falling back to preview frames"
)
thumbs = self.get_preview_frames_as_bytes(
camera,
final_data["start_time"],
final_data["end_time"],
final_data["thumb_path"],
id,
camera_config.review.genai.debug_save_thumbnails,
)
elif camera_config.review.genai.debug_save_thumbnails:
# Save debug thumbnails for recordings
Path(os.path.join(CLIPS_DIR, "genai-requests", id)).mkdir(
parents=True, exist_ok=True
)
shutil.copy(
thumb_path,
os.path.join(
CLIPS_DIR,
f"genai-requests/{id}/{idx}.webp",
),
)
for idx, frame_bytes in enumerate(thumbs):
with open(
os.path.join(CLIPS_DIR, f"genai-requests/{id}/{idx}.jpg"),
"wb",
) as f:
f.write(frame_bytes)
else:
# Use preview frames
thumbs = self.get_preview_frames_as_bytes(
camera,
final_data["start_time"],
final_data["end_time"],
final_data["thumb_path"],
id,
camera_config.review.genai.debug_save_thumbnails,
)
# kickoff analysis
self.review_descs_dps.update()
@ -231,6 +263,122 @@ class ReviewDescriptionProcessor(PostProcessorApi):
return selected_frames
def get_recording_frames(
self,
camera: str,
start_time: float,
end_time: float,
height: int = 480,
) -> list[bytes]:
"""Get frames from recordings at specified timestamps."""
duration = end_time - start_time
desired_frame_count = self.calculate_frame_count(ImageSourceEnum.recordings)
# Calculate evenly spaced timestamps throughout the duration
if desired_frame_count == 1:
timestamps = [start_time + duration / 2]
else:
step = duration / (desired_frame_count - 1)
timestamps = [start_time + (i * step) for i in range(desired_frame_count)]
def extract_frame_from_recording(ts: float) -> bytes | None:
"""Extract a single frame from recording at given timestamp."""
try:
recording = (
Recordings.select(
Recordings.path,
Recordings.start_time,
)
.where((ts >= Recordings.start_time) & (ts <= Recordings.end_time))
.where(Recordings.camera == camera)
.order_by(Recordings.start_time.desc())
.limit(1)
.get()
)
time_in_segment = ts - recording.start_time
return get_image_from_recording(
self.config.ffmpeg,
recording.path,
time_in_segment,
"mjpeg",
height=height,
)
except DoesNotExist:
return None
frames = []
for timestamp in timestamps:
try:
# Try to extract frame at exact timestamp
image_data = extract_frame_from_recording(timestamp)
if not image_data:
# Try with rounded timestamp as fallback
rounded_timestamp = math.ceil(timestamp)
image_data = extract_frame_from_recording(rounded_timestamp)
if image_data:
frames.append(image_data)
else:
logger.warning(
f"No recording found for {camera} at timestamp {timestamp}"
)
except Exception as e:
logger.error(
f"Error extracting frame from recording for {camera} at {timestamp}: {e}"
)
continue
return frames
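The frame-sampling logic above spaces timestamps evenly across the recording window. A minimal standalone sketch of that calculation (the `spaced_timestamps` helper name and signature are illustrative assumptions, not Frigate's API):

```python
def spaced_timestamps(start_time: float, end_time: float, count: int) -> list[float]:
    """Return `count` evenly spaced timestamps across [start_time, end_time].

    A single frame is sampled from the midpoint; otherwise the first and
    last timestamps land exactly on the range endpoints.
    """
    duration = end_time - start_time
    if count == 1:
        return [start_time + duration / 2]
    step = duration / (count - 1)
    return [start_time + i * step for i in range(count)]
```

Note the `count - 1` divisor: with 5 frames over 10 seconds, the step is 2.5s and both endpoints are included.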
def get_preview_frames_as_bytes(
self,
camera: str,
start_time: float,
end_time: float,
thumb_path_fallback: str,
review_id: str,
save_debug: bool,
) -> list[bytes]:
"""Get preview frames and convert them to JPEG bytes.
Args:
camera: Camera name
start_time: Start timestamp
end_time: End timestamp
thumb_path_fallback: Fallback thumbnail path if no preview frames found
review_id: Review item ID for debug saving
save_debug: Whether to save debug thumbnails
Returns:
List of JPEG image bytes
"""
frame_paths = self.get_cache_frames(camera, start_time, end_time)
if not frame_paths:
frame_paths = [thumb_path_fallback]
thumbs = []
for idx, thumb_path in enumerate(frame_paths):
thumb_data = cv2.imread(thumb_path)
ret, jpg = cv2.imencode(
".jpg", thumb_data, [int(cv2.IMWRITE_JPEG_QUALITY), 100]
)
if ret:
thumbs.append(jpg.tobytes())
if save_debug:
Path(os.path.join(CLIPS_DIR, "genai-requests", review_id)).mkdir(
parents=True, exist_ok=True
)
shutil.copy(
thumb_path,
os.path.join(CLIPS_DIR, f"genai-requests/{review_id}/{idx}.webp"),
)
return thumbs
@staticmethod
def run_analysis(
@@ -254,25 +402,25 @@ def run_analysis(
"duration": round(final_data["end_time"] - final_data["start_time"]),
}
objects = []
named_objects = []
unified_objects = []
objects_list = final_data["data"]["objects"]
sub_labels_list = final_data["data"]["sub_labels"]
for i, verified_label in enumerate(final_data["data"]["verified_objects"]):
object_type = verified_label.replace("-verified", "").replace("_", " ")
name = sub_labels_list[i].replace("_", " ").title()
unified_objects.append(f"{name} ({object_type})")
# Add non-verified objects as "Unknown (type)"
for label in objects_list:
if "-verified" in label:
continue
elif label in labelmap_objects:
objects.append(label.replace("_", " ").title())
object_type = label.replace("_", " ")
unified_objects.append(f"Unknown ({object_type})")
for i, verified_label in enumerate(final_data["data"]["verified_objects"]):
named_objects.append(
f"{sub_labels_list[i].replace('_', ' ').title()} ({verified_label.replace('-verified', '')})"
)
analytics_data["objects"] = objects
analytics_data["recognized_objects"] = named_objects
analytics_data["unified_objects"] = unified_objects
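The unified list built above renders each verified object as "Name (type)" and each unverified detection as "Unknown (type)". A simplified standalone sketch of that formatting (the helper name is hypothetical, and the `labelmap_objects` filter applied in the real code is omitted for brevity):

```python
def build_unified_objects(
    verified_objects: list[str],
    sub_labels: list[str],
    objects: list[str],
) -> list[str]:
    """Format scene objects as 'Name (type)' or 'Unknown (type)'."""
    unified = []
    # Verified objects carry a recognized name taken from their sub label
    for i, verified_label in enumerate(verified_objects):
        object_type = verified_label.replace("-verified", "").replace("_", " ")
        name = sub_labels[i].replace("_", " ").title()
        unified.append(f"{name} ({object_type})")
    # Remaining, non-verified detections are listed as Unknown of their type
    for label in objects:
        if "-verified" in label:
            continue
        unified.append(f"Unknown ({label.replace('_', ' ')})")
    return unified
```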
metadata = genai_client.generate_review_description(
analytics_data,


@@ -34,6 +34,8 @@ except ModuleNotFoundError:
logger = logging.getLogger(__name__)
MAX_OBJECT_CLASSIFICATIONS = 16
class CustomStateClassificationProcessor(RealTimeProcessorApi):
def __init__(
@@ -53,6 +55,7 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
self.tensor_output_details: dict[str, Any] | None = None
self.labelmap: dict[int, str] = {}
self.classifications_per_second = EventsPerSecond()
self.state_history: dict[str, dict[str, Any]] = {}
if (
self.metrics
@@ -94,6 +97,42 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
if self.inference_speed:
self.inference_speed.update(duration)
def verify_state_change(self, camera: str, detected_state: str) -> str | None:
"""
Verify a state change by requiring 3 consecutive identical states before publishing.
Returns state to publish or None if verification not complete.
"""
if camera not in self.state_history:
self.state_history[camera] = {
"current_state": None,
"pending_state": None,
"consecutive_count": 0,
}
verification = self.state_history[camera]
if detected_state == verification["current_state"]:
verification["pending_state"] = None
verification["consecutive_count"] = 0
return None
if detected_state == verification["pending_state"]:
verification["consecutive_count"] += 1
if verification["consecutive_count"] >= 3:
verification["current_state"] = detected_state
verification["pending_state"] = None
verification["consecutive_count"] = 0
return detected_state
else:
verification["pending_state"] = detected_state
verification["consecutive_count"] = 1
logger.debug(
f"New state '{detected_state}' detected for {camera}, need {3 - verification['consecutive_count']} more consecutive detections"
)
return None
def process_frame(self, frame_data: dict[str, Any], frame: np.ndarray):
if self.metrics and self.model_config.name in self.metrics.classification_cps:
self.metrics.classification_cps[
@@ -131,6 +170,19 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
self.last_run = now
should_run = True
# Shortcut: always run if we have a pending state verification to complete
if (
not should_run
and camera in self.state_history
and self.state_history[camera]["pending_state"] is not None
and now > self.last_run + 0.5
):
self.last_run = now
should_run = True
logger.debug(
f"Running verification check for pending state: {self.state_history[camera]['pending_state']} ({self.state_history[camera]['consecutive_count']}/3)"
)
if not should_run:
return
@@ -188,10 +240,19 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
score,
)
if score >= self.model_config.threshold:
if score < self.model_config.threshold:
logger.debug(
f"Score {score} below threshold {self.model_config.threshold}, skipping verification"
)
return
detected_state = self.labelmap[best_id]
verified_state = self.verify_state_change(camera, detected_state)
if verified_state is not None:
self.requestor.send_data(
f"{camera}/classification/{self.model_config.name}",
self.labelmap[best_id],
verified_state,
)
def handle_request(self, topic, request_data):
@@ -230,7 +291,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
self.sub_label_publisher = sub_label_publisher
self.tensor_input_details: dict[str, Any] | None = None
self.tensor_output_details: dict[str, Any] | None = None
self.detected_objects: dict[str, float] = {}
self.classification_history: dict[str, list[tuple[str, float, float]]] = {}
self.labelmap: dict[int, str] = {}
self.classifications_per_second = EventsPerSecond()
@@ -272,6 +333,56 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
if self.inference_speed:
self.inference_speed.update(duration)
def get_weighted_score(
self,
object_id: str,
current_label: str,
current_score: float,
current_time: float,
) -> tuple[str | None, float]:
"""
Determine a weighted score from the classification history to prevent false positives/negatives.
Requires 60% of attempts to agree on a label before publishing.
Returns (weighted_label, weighted_score), or (None, 0.0) if there is no consensus yet.
"""
if object_id not in self.classification_history:
self.classification_history[object_id] = []
self.classification_history[object_id].append(
(current_label, current_score, current_time)
)
history = self.classification_history[object_id]
if len(history) < 3:
return None, 0.0
label_counts = {}
label_scores = {}
total_attempts = len(history)
for label, score, timestamp in history:
if label not in label_counts:
label_counts[label] = 0
label_scores[label] = []
label_counts[label] += 1
label_scores[label].append(score)
best_label = max(label_counts, key=label_counts.get)
best_count = label_counts[best_label]
consensus_threshold = total_attempts * 0.6
if best_count < consensus_threshold:
return None, 0.0
avg_score = sum(label_scores[best_label]) / len(label_scores[best_label])
if best_label == "none":
return None, 0.0
return best_label, avg_score
def process_frame(self, obj_data, frame):
if self.metrics and self.model_config.name in self.metrics.classification_cps:
self.metrics.classification_cps[
@@ -284,6 +395,21 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
if obj_data["label"] not in self.model_config.object_config.objects:
return
if obj_data.get("end_time") is not None:
return
if obj_data.get("stationary"):
return
object_id = obj_data["id"]
if (
object_id in self.classification_history
and len(self.classification_history[object_id])
>= MAX_OBJECT_CLASSIFICATIONS
):
return
now = datetime.datetime.now().timestamp()
x, y, x2, y2 = calculate_region(
frame.shape,
@@ -315,7 +441,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
write_classification_attempt(
self.train_dir,
cv2.cvtColor(crop, cv2.COLOR_RGB2BGR),
obj_data["id"],
object_id,
now,
"unknown",
0.0,
@@ -331,13 +457,12 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
probs = res / res.sum(axis=0)
best_id = np.argmax(probs)
score = round(probs[best_id], 2)
previous_score = self.detected_objects.get(obj_data["id"], 0.0)
self.__update_metrics(datetime.datetime.now().timestamp() - now)
write_classification_attempt(
self.train_dir,
cv2.cvtColor(crop, cv2.COLOR_RGB2BGR),
obj_data["id"],
object_id,
now,
self.labelmap[best_id],
score,
@@ -347,30 +472,34 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
logger.debug(f"Score {score} is less than threshold.")
return
if score <= previous_score:
logger.debug(f"Score {score} is worse than previous score {previous_score}")
return
sub_label = self.labelmap[best_id]
self.detected_objects[obj_data["id"]] = score
if (
self.model_config.object_config.classification_type
== ObjectClassificationType.sub_label
):
if sub_label != "none":
consensus_label, consensus_score = self.get_weighted_score(
object_id, sub_label, score, now
)
if consensus_label is not None:
if (
self.model_config.object_config.classification_type
== ObjectClassificationType.sub_label
):
self.sub_label_publisher.publish(
(obj_data["id"], sub_label, score),
(object_id, consensus_label, consensus_score),
EventMetadataTypeEnum.sub_label,
)
elif (
self.model_config.object_config.classification_type
== ObjectClassificationType.attribute
):
self.sub_label_publisher.publish(
(obj_data["id"], self.model_config.name, sub_label, score),
EventMetadataTypeEnum.attribute.value,
)
elif (
self.model_config.object_config.classification_type
== ObjectClassificationType.attribute
):
self.sub_label_publisher.publish(
(
object_id,
self.model_config.name,
consensus_label,
consensus_score,
),
EventMetadataTypeEnum.attribute.value,
)
def handle_request(self, topic, request_data):
if topic == EmbeddingsRequestEnum.reload_classification_model.value:
@@ -388,8 +517,8 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
return None
def expire_object(self, object_id, camera):
if object_id in self.detected_objects:
self.detected_objects.pop(object_id)
if object_id in self.classification_history:
self.classification_history.pop(object_id)
@staticmethod


@@ -63,18 +63,21 @@ class GenAIClient:
else:
return ""
def get_verified_objects() -> str:
if review_data["recognized_objects"]:
return " - " + "\n - ".join(review_data["recognized_objects"])
def get_objects_list() -> str:
if review_data["unified_objects"]:
return "\n- " + "\n- ".join(review_data["unified_objects"])
else:
return " None"
return "\n- (No objects detected)"
context_prompt = f"""
Please analyze the sequence of images ({len(thumbnails)} total) taken in chronological order from the perspective of the {review_data["camera"].replace("_", " ")} security camera.
Your task is to analyze the sequence of images ({len(thumbnails)} total) taken in chronological order from the perspective of the {review_data["camera"].replace("_", " ")} security camera.
## Normal Activity Patterns for This Property
**Normal activity patterns for this property:**
{activity_context_prompt}
## Task Instructions
Your task is to provide a clear, accurate description of the scene that:
1. States exactly what is happening based on observable actions and movements.
2. Evaluates whether the observable evidence suggests normal activity for this property or genuine security concerns.
@@ -82,8 +85,10 @@ Your task is to provide a clear, accurate description of the scene that:
**IMPORTANT: Start by checking if the activity matches the normal patterns above. If it does, assign Level 0. Only consider higher threat levels if the activity clearly deviates from normal patterns or shows genuine security concerns.**
## Analysis Guidelines
When forming your description:
- **CRITICAL: Only describe objects explicitly listed in "Detected objects" below.** Do not infer or mention additional people, vehicles, or objects not present in the detected objects list, even if visual patterns suggest them. If only a car is detected, do not describe a person interacting with it unless "person" is also in the detected objects list.
- **CRITICAL: Only describe objects explicitly listed in "Objects in Scene" below.** Do not infer or mention additional people, vehicles, or objects not present in this list, even if visual patterns suggest them. If only a car is listed, do not describe a person interacting with it unless "person" is also in the objects list.
- **Only describe actions actually visible in the frames.** Do not assume or infer actions that you don't observe happening. If someone walks toward furniture but you never see them sit, do not say they sat. Stick to what you can see across the sequence.
- Describe what you observe: actions, movements, interactions with objects and the environment. Include any observable environmental changes (e.g., lighting changes triggered by activity).
- Note visible details such as clothing, items being carried or placed, tools or equipment present, and how they interact with the property or objects.
@@ -92,29 +97,36 @@ When forming your description:
- Identify patterns that suggest genuine security concerns: testing doors/windows on vehicles or buildings, accessing unauthorized areas, attempting to conceal actions, extended loitering without apparent purpose, taking items, behavior that clearly doesn't align with the zone context and detected objects.
- **Weigh all evidence holistically**: Start by checking if the activity matches the normal patterns above. If it does, assign Level 0. Only consider Level 1 if the activity clearly deviates from normal patterns or shows genuine security concerns that warrant attention.
## Response Format
Your response MUST be a flat JSON object with:
- `title` (string): A concise, one-sentence title that captures the main activity. Include any verified recognized objects (from the "Verified recognized objects" list below) and key detected objects. Examples: "Joe walking dog in backyard", "Unknown person testing car doors at night".
- `title` (string): A concise, one-sentence title that captures the main activity. Use the exact names from "Objects in Scene" below (e.g., if the list shows "Joe (person)" and "Unknown (person)", say "Joe and unknown person"). Examples: "Joe walking dog in backyard", "Unknown person testing car doors at night", "Joe and unknown person in driveway".
- `scene` (string): A narrative description of what happens across the sequence from start to finish. **Only describe actions you can actually observe happening in the frames provided.** Do not infer or assume actions that aren't visible (e.g., if you see someone walking but never see them sit, don't say they sat down). Include setting, detected objects, and their observable actions. Avoid speculation or filling in assumed behaviors. Your description should align with and support the threat level you assign.
- `confidence` (float): 0-1 confidence in your analysis. Higher confidence when objects/actions are clearly visible and context is unambiguous. Lower confidence when the sequence is unclear, objects are partially obscured, or context is ambiguous.
- `potential_threat_level` (integer): 0, 1, or 2 as defined below. Your threat level must be consistent with your scene description and the guidance above.
{get_concern_prompt()}
Threat-level definitions:
## Threat Level Definitions
- 0 **Normal activity (DEFAULT)**: What you observe matches the normal activity patterns above or is consistent with expected activity for this property type. The observable evidence, considering zone context, detected objects, and timing together, supports a benign explanation. **Use this level for routine activities even if minor ambiguous elements exist.**
- 1 **Potentially suspicious**: Observable behavior raises genuine security concerns that warrant human review. The evidence doesn't support a routine explanation and clearly deviates from the normal patterns above. Examples: testing doors/windows on vehicles or structures, accessing areas that don't align with the activity, taking items that likely don't belong to them, behavior clearly inconsistent with the zone and context, or activity that lacks any visible legitimate indicators. **Only use this level when the activity clearly doesn't match normal patterns.**
- 2 **Immediate threat**: Clear evidence of forced entry, break-in, vandalism, aggression, weapons, theft in progress, or active property damage.
Sequence details:
## Sequence Details
- Frame 1 = earliest, Frame {len(thumbnails)} = latest
- Activity started at {review_data["start"]} and lasted {review_data["duration"]} seconds
- Detected objects: {", ".join(review_data["objects"])}
- Verified recognized objects (use these names when describing these objects):
{get_verified_objects()}
- Zones involved: {", ".join(z.replace("_", " ").title() for z in review_data["zones"]) or "None"}
**IMPORTANT:**
## Objects in Scene
Each line represents one object in the scene. Named objects are verified identities; "Unknown" indicates unverified objects of that type:
{get_objects_list()}
## Important Notes
- Values must be plain strings, floats, or integers; no nested objects, no extra commentary.
- Only describe objects from the "Detected objects" list above. Do not hallucinate additional objects.
- Only describe objects from the "Objects in Scene" list above. Do not hallucinate additional objects.
- When describing people or vehicles, use the exact names provided.
{get_language_prompt()}
"""
logger.debug(
@@ -149,7 +161,10 @@ Sequence details:
try:
metadata = ReviewMetadata.model_validate_json(clean_json)
if review_data["recognized_objects"]:
if any(
not obj.startswith("Unknown")
for obj in review_data["unified_objects"]
):
metadata.potential_threat_level = 0
metadata.time = review_data["start"]


@@ -52,7 +52,7 @@
"export": "Export",
"selectOrExport": "Select or Export",
"toast": {
"success": "Successfully started export. View the file in the /exports folder.",
"success": "Successfully started export. View the file in the exports page.",
"error": {
"failed": "Failed to start export: {{error}}",
"endTimeMustAfterStartTime": "End time must be after start time",


@@ -36,8 +36,8 @@
"video": "video",
"object_lifecycle": "object lifecycle"
},
"objectLifecycle": {
"title": "Object Lifecycle",
"trackingDetails": {
"title": "Tracking Details",
"noImageFound": "No image found for this timestamp.",
"createObjectMask": "Create Object Mask",
"adjustAnnotationSettings": "Adjust annotation settings",
@@ -168,9 +168,9 @@
"label": "Download snapshot",
"aria": "Download snapshot"
},
"viewObjectLifecycle": {
"label": "View object lifecycle",
"aria": "Show the object lifecycle"
"viewTrackingDetails": {
"label": "View tracking details",
"aria": "Show the tracking details"
},
"findSimilar": {
"label": "Find similar",
@@ -205,7 +205,7 @@
"dialog": {
"confirmDelete": {
"title": "Confirm Delete",
"desc": "Deleting this tracked object removes the snapshot, any saved embeddings, and any associated object lifecycle entries. Recorded footage of this tracked object in History view will <em>NOT</em> be deleted.<br /><br />Are you sure you want to proceed?"
"desc": "Deleting this tracked object removes the snapshot, any saved embeddings, and any associated tracking details entries. Recorded footage of this tracked object in History view will <em>NOT</em> be deleted.<br /><br />Are you sure you want to proceed?"
}
},
"noTrackedObjects": "No Tracked Objects Found",


@@ -34,7 +34,7 @@ import { toast } from "sonner";
import useKeyboardListener from "@/hooks/use-keyboard-listener";
import { Tooltip, TooltipContent, TooltipTrigger } from "../ui/tooltip";
import { capitalizeFirstLetter } from "@/utils/stringUtil";
import { buttonVariants } from "../ui/button";
import { Button, buttonVariants } from "../ui/button";
import { Trans, useTranslation } from "react-i18next";
import { cn } from "@/lib/utils";
@@ -83,6 +83,11 @@ export default function ReviewCard({
if (response.status == 200) {
toast.success(t("export.toast.success"), {
position: "top-center",
action: (
<a href="/export" target="_blank" rel="noopener noreferrer">
<Button>View</Button>
</a>
),
});
}
})


@@ -13,7 +13,7 @@ type SearchThumbnailProps = {
columns: number;
findSimilar: () => void;
refreshResults: () => void;
showObjectLifecycle: () => void;
showTrackingDetails: () => void;
showSnapshot: () => void;
addTrigger: () => void;
};
@@ -23,7 +23,7 @@ export default function SearchThumbnailFooter({
columns,
findSimilar,
refreshResults,
showObjectLifecycle,
showTrackingDetails,
showSnapshot,
addTrigger,
}: SearchThumbnailProps) {
@@ -61,7 +61,7 @@ export default function SearchThumbnailFooter({
searchResult={searchResult}
findSimilar={findSimilar}
refreshResults={refreshResults}
showObjectLifecycle={showObjectLifecycle}
showTrackingDetails={showTrackingDetails}
showSnapshot={showSnapshot}
addTrigger={addTrigger}
/>


@@ -47,7 +47,7 @@ type SearchResultActionsProps = {
searchResult: SearchResult;
findSimilar: () => void;
refreshResults: () => void;
showObjectLifecycle: () => void;
showTrackingDetails: () => void;
showSnapshot: () => void;
addTrigger: () => void;
isContextMenu?: boolean;
@@ -58,7 +58,7 @@ export default function SearchResultActions({
searchResult,
findSimilar,
refreshResults,
showObjectLifecycle,
showTrackingDetails,
showSnapshot,
addTrigger,
isContextMenu = false,
@@ -125,11 +125,11 @@ export default function SearchResultActions({
)}
{searchResult.data.type == "object" && (
<MenuItem
aria-label={t("itemMenu.viewObjectLifecycle.aria")}
onClick={showObjectLifecycle}
aria-label={t("itemMenu.viewTrackingDetails.aria")}
onClick={showTrackingDetails}
>
<FaArrowsRotate className="mr-2 size-4" />
<span>{t("itemMenu.viewObjectLifecycle.label")}</span>
<span>{t("itemMenu.viewTrackingDetails.label")}</span>
</MenuItem>
)}
{config?.semantic_search?.enabled && isContextMenu && (


@@ -95,6 +95,11 @@ export default function ExportDialog({
if (response.status == 200) {
toast.success(t("export.toast.success"), {
position: "top-center",
action: (
<a href="/export" target="_blank" rel="noopener noreferrer">
<Button>View</Button>
</a>
),
});
setName("");
setRange(undefined);


@@ -104,6 +104,11 @@ export default function MobileReviewSettingsDrawer({
t("export.toast.success", { ns: "components/dialog" }),
{
position: "top-center",
action: (
<a href="/export" target="_blank" rel="noopener noreferrer">
<Button>View</Button>
</a>
),
},
);
setName("");


@@ -1,5 +1,5 @@
import { useMemo, useCallback } from "react";
import { ObjectLifecycleSequence, LifecycleClassType } from "@/types/timeline";
import { TrackingDetailsSequence, LifecycleClassType } from "@/types/timeline";
import { FrigateConfig } from "@/types/frigateConfig";
import useSWR from "swr";
import { useDetailStream } from "@/context/detail-stream-context";
@@ -11,38 +11,80 @@ import {
import { TooltipPortal } from "@radix-ui/react-tooltip";
import { cn } from "@/lib/utils";
import { useTranslation } from "react-i18next";
import { Event } from "@/types/event";
type ObjectTrackOverlayProps = {
camera: string;
selectedObjectId: string;
showBoundingBoxes?: boolean;
currentTime: number;
videoWidth: number;
videoHeight: number;
className?: string;
onSeekToTime?: (timestamp: number, play?: boolean) => void;
objectTimeline?: ObjectLifecycleSequence[];
};
type PathPoint = {
x: number;
y: number;
timestamp: number;
lifecycle_item?: TrackingDetailsSequence;
objectId: string;
};
type ObjectData = {
objectId: string;
label: string;
color: string;
pathPoints: PathPoint[];
currentZones: string[];
currentBox?: number[];
};
export default function ObjectTrackOverlay({
camera,
selectedObjectId,
showBoundingBoxes = false,
currentTime,
videoWidth,
videoHeight,
className,
onSeekToTime,
objectTimeline,
}: ObjectTrackOverlayProps) {
const { t } = useTranslation("views/events");
const { data: config } = useSWR<FrigateConfig>("config");
const { annotationOffset } = useDetailStream();
const { annotationOffset, selectedObjectIds } = useDetailStream();
const effectiveCurrentTime = currentTime - annotationOffset / 1000;
// Fetch the full event data to get saved path points
const { data: eventData } = useSWR(["event_ids", { ids: selectedObjectId }]);
// Fetch all event data in a single request (CSV ids)
const { data: eventsData } = useSWR<Event[]>(
selectedObjectIds.length > 0
? ["event_ids", { ids: selectedObjectIds.join(",") }]
: null,
);
// Fetch timeline data for all selected objects in a single request
const { data: timelineData } = useSWR<TrackingDetailsSequence[]>(
selectedObjectIds.length > 0
? `timeline?source_id=${selectedObjectIds.join(",")}&limit=1000`
: null,
{ revalidateOnFocus: false },
);
const timelineResults = useMemo(() => {
// Group timeline entries by source_id
if (!timelineData) return selectedObjectIds.map(() => []);
const grouped: Record<string, TrackingDetailsSequence[]> = {};
for (const entry of timelineData) {
if (!grouped[entry.source_id]) {
grouped[entry.source_id] = [];
}
grouped[entry.source_id].push(entry);
}
// Return timeline arrays in the same order as selectedObjectIds
return selectedObjectIds.map((id) => grouped[id] || []);
}, [selectedObjectIds, timelineData]);
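The grouping above fans one CSV timeline response back out into per-object arrays, preserving the selection order and substituting an empty list for objects with no entries. The same pattern, sketched in Python for clarity (field names mirror the TypeScript types; the function itself is hypothetical):

```python
def group_timelines(
    entries: list[dict], selected_ids: list[str]
) -> list[list[dict]]:
    """Group timeline entries by source_id, returned in selected-id order."""
    grouped: dict[str, list[dict]] = {}
    for entry in entries:
        grouped.setdefault(entry["source_id"], []).append(entry)
    # Missing ids map to an empty timeline rather than being dropped
    return [grouped.get(obj_id, []) for obj_id in selected_ids]
```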
const typeColorMap = useMemo(
() => ({
@@ -58,16 +100,18 @@ export default function ObjectTrackOverlay({
[],
);
const getObjectColor = useMemo(() => {
return (label: string) => {
const getObjectColor = useCallback(
(label: string, objectId: string) => {
const objectColor = config?.model?.colormap[label];
if (objectColor) {
const reversed = [...objectColor].reverse();
return `rgb(${reversed.join(",")})`;
}
return "rgb(255, 0, 0)"; // fallback red
};
}, [config]);
// Fallback to deterministic color based on object ID
return generateColorFromId(objectId);
},
[config],
);
const getZoneColor = useCallback(
(zoneName: string) => {
@@ -81,125 +125,121 @@
[config, camera],
);
const currentObjectZones = useMemo(() => {
if (!objectTimeline) return [];
// Find the most recent timeline event at or before effective current time
const relevantEvents = objectTimeline
.filter((event) => event.timestamp <= effectiveCurrentTime)
.sort((a, b) => b.timestamp - a.timestamp); // Most recent first
// Get zones from the most recent event
return relevantEvents[0]?.data?.zones || [];
}, [objectTimeline, effectiveCurrentTime]);
const zones = useMemo(() => {
if (!config?.cameras?.[camera]?.zones || !currentObjectZones.length)
// Build per-object data structures
const objectsData = useMemo<ObjectData[]>(() => {
if (!eventsData || !Array.isArray(eventsData)) return [];
if (config?.cameras[camera]?.onvif.autotracking.enabled_in_config)
return [];
return selectedObjectIds
.map((objectId, index) => {
const eventData = eventsData.find((e) => e.id === objectId);
const timelineData = timelineResults[index];
// get saved path points from event
const savedPathPoints: PathPoint[] =
eventData?.data?.path_data?.map(
([coords, timestamp]: [number[], number]) => ({
x: coords[0],
y: coords[1],
timestamp,
lifecycle_item: undefined,
objectId,
}),
) || [];
// timeline points for this object
const eventSequencePoints: PathPoint[] =
timelineData
?.filter(
(event: TrackingDetailsSequence) => event.data.box !== undefined,
)
.map((event: TrackingDetailsSequence) => {
const [left, top, width, height] = event.data.box!;
return {
x: left + width / 2, // Center x
y: top + height, // Bottom y
timestamp: event.timestamp,
lifecycle_item: event,
objectId,
};
}) || [];
// show full path once current time has reached the object's start time
const combinedPoints = [...savedPathPoints, ...eventSequencePoints]
.sort((a, b) => a.timestamp - b.timestamp)
.filter(
(point) =>
currentTime >= (eventData?.start_time ?? 0) &&
point.timestamp >= (eventData?.start_time ?? 0) &&
point.timestamp <= (eventData?.end_time ?? Infinity),
);
// Get color for this object
const label = eventData?.label || "unknown";
const color = getObjectColor(label, objectId);
// Get current zones
const currentZones =
timelineData
?.filter(
(event: TrackingDetailsSequence) =>
event.timestamp <= effectiveCurrentTime,
)
.sort(
(a: TrackingDetailsSequence, b: TrackingDetailsSequence) =>
b.timestamp - a.timestamp,
)[0]?.data?.zones || [];
// Get current bounding box
const currentBox = timelineData
?.filter(
(event: TrackingDetailsSequence) =>
event.timestamp <= effectiveCurrentTime && event.data.box,
)
.sort(
(a: TrackingDetailsSequence, b: TrackingDetailsSequence) =>
b.timestamp - a.timestamp,
)[0]?.data?.box;
return {
objectId,
label,
color,
pathPoints: combinedPoints,
currentZones,
currentBox,
};
})
.filter((obj: ObjectData) => obj.pathPoints.length > 0); // Only include objects with path data
}, [
eventsData,
selectedObjectIds,
timelineResults,
currentTime,
effectiveCurrentTime,
getObjectColor,
config,
camera,
]);
// Collect all zones across all objects
const allZones = useMemo(() => {
if (!config?.cameras?.[camera]?.zones) return [];
const zoneNames = new Set<string>();
objectsData.forEach((obj) => {
obj.currentZones.forEach((zone) => zoneNames.add(zone));
});
return Object.entries(config.cameras[camera].zones)
.filter(([name]) => currentObjectZones.includes(name))
.filter(([name]) => zoneNames.has(name))
.map(([name, zone]) => ({
name,
coordinates: zone.coordinates,
color: getZoneColor(name),
}));
}, [config, camera, getZoneColor, currentObjectZones]);
// get saved path points from event
const savedPathPoints = useMemo(() => {
return (
eventData?.[0].data?.path_data?.map(
([coords, timestamp]: [number[], number]) => ({
x: coords[0],
y: coords[1],
timestamp,
lifecycle_item: undefined,
}),
) || []
);
}, [eventData]);
// timeline points for selected event
const eventSequencePoints = useMemo(() => {
return (
objectTimeline
?.filter((event) => event.data.box !== undefined)
.map((event) => {
const [left, top, width, height] = event.data.box!;
return {
x: left + width / 2, // Center x
y: top + height, // Bottom y
timestamp: event.timestamp,
lifecycle_item: event,
};
}) || []
);
}, [objectTimeline]);
// final object path with timeline points included
const pathPoints = useMemo(() => {
// don't display a path for autotracking cameras
if (config?.cameras[camera]?.onvif.autotracking.enabled_in_config)
return [];
const combinedPoints = [...savedPathPoints, ...eventSequencePoints].sort(
(a, b) => a.timestamp - b.timestamp,
);
// Filter points around current time (within a reasonable window)
const timeWindow = 30; // 30 seconds window
return combinedPoints.filter(
(point) =>
point.timestamp >= currentTime - timeWindow &&
point.timestamp <= currentTime + timeWindow,
);
}, [savedPathPoints, eventSequencePoints, config, camera, currentTime]);
// get absolute positions on the svg canvas for each point
const absolutePositions = useMemo(() => {
if (!pathPoints) return [];
return pathPoints.map((point) => {
// Find the corresponding timeline entry for this point
const timelineEntry = objectTimeline?.find(
(entry) => entry.timestamp == point.timestamp,
);
return {
x: point.x * videoWidth,
y: point.y * videoHeight,
timestamp: point.timestamp,
lifecycle_item:
timelineEntry ||
(point.box // normal path point
? {
timestamp: point.timestamp,
camera: camera,
source: "tracked_object",
source_id: selectedObjectId,
class_type: "visible" as LifecycleClassType,
data: {
camera: camera,
label: point.label,
sub_label: "",
box: point.box,
region: [0, 0, 0, 0], // placeholder
attribute: "",
zones: [],
},
}
: undefined),
};
});
}, [
pathPoints,
videoWidth,
videoHeight,
objectTimeline,
camera,
selectedObjectId,
]);
}, [config, camera, objectsData, getZoneColor]);
const generateStraightPath = useCallback(
(points: { x: number; y: number }[]) => {
@@ -214,15 +254,20 @@
);
const getPointColor = useCallback(
(baseColor: number[], type?: string) => {
(baseColorString: string, type?: string) => {
if (type && typeColorMap[type as keyof typeof typeColorMap]) {
const typeColor = typeColorMap[type as keyof typeof typeColorMap];
if (typeColor) {
return `rgb(${typeColor.join(",")})`;
}
}
// normal path point
return `rgb(${baseColor.map((c) => Math.max(0, c - 10)).join(",")})`;
// Parse and darken base color slightly for path points
const match = baseColorString.match(/\d+/g);
if (match) {
const [r, g, b] = match.map(Number);
return `rgb(${Math.max(0, r - 10)}, ${Math.max(0, g - 10)}, ${Math.max(0, b - 10)})`;
}
return baseColorString;
},
[typeColorMap],
);
@@ -234,49 +279,8 @@ export default function ObjectTrackOverlay({
[onSeekToTime],
);
// render bounding box for object at current time if we have a timeline entry
const currentBoundingBox = useMemo(() => {
if (!objectTimeline) return null;
// Find the most recent timeline event at or before effective current time with a bounding box
const relevantEvents = objectTimeline
.filter(
(event) => event.timestamp <= effectiveCurrentTime && event.data.box,
)
.sort((a, b) => b.timestamp - a.timestamp); // Most recent first
const currentEvent = relevantEvents[0];
if (!currentEvent?.data.box) return null;
const [left, top, width, height] = currentEvent.data.box;
return {
left,
top,
width,
height,
centerX: left + width / 2,
centerY: top + height,
};
}, [objectTimeline, effectiveCurrentTime]);
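Boxes in the timeline data are normalized `[left, top, width, height]` tuples, and the anchor drawn for a track point is the box's bottom-center (see the `centerX`/`centerY` computation above). That conversion to pixel space, extracted as a standalone sketch (`bottomCenter` is an illustrative helper name):

```typescript
// Convert a normalized [left, top, width, height] box to the
// pixel-space bottom-center anchor used to plot the track point.
function bottomCenter(
  box: [number, number, number, number],
  videoWidth: number,
  videoHeight: number,
): { x: number; y: number } {
  const [left, top, width, height] = box;
  return {
    x: (left + width / 2) * videoWidth, // horizontal center
    y: (top + height) * videoHeight, // bottom edge
  };
}
```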
const objectColor = useMemo(() => {
return pathPoints[0]?.label
? getObjectColor(pathPoints[0].label)
: "rgb(255, 0, 0)";
}, [pathPoints, getObjectColor]);
const objectColorArray = useMemo(() => {
return pathPoints[0]?.label
? getObjectColor(pathPoints[0].label).match(/\d+/g)?.map(Number) || [
255, 0, 0,
]
: [255, 0, 0];
}, [pathPoints, getObjectColor]);
// render any zones for object at current time
const zonePolygons = useMemo(() => {
return zones.map((zone) => {
return allZones.map((zone) => {
// Convert zone coordinates from normalized (0-1) to pixel coordinates
const points = zone.coordinates
.split(",")
@@ -298,9 +302,9 @@ export default function ObjectTrackOverlay({
stroke: zone.color,
};
});
}, [zones, videoWidth, videoHeight]);
}, [allZones, videoWidth, videoHeight]);
if (!pathPoints.length || !config) {
if (objectsData.length === 0 || !config) {
return null;
}
@@ -325,73 +329,102 @@ export default function ObjectTrackOverlay({
/>
))}
{absolutePositions.length > 1 && (
<path
d={generateStraightPath(absolutePositions)}
fill="none"
stroke={objectColor}
strokeWidth="5"
strokeLinecap="round"
strokeLinejoin="round"
/>
)}
{objectsData.map((objData) => {
const absolutePositions = objData.pathPoints.map((point) => ({
x: point.x * videoWidth,
y: point.y * videoHeight,
timestamp: point.timestamp,
lifecycle_item: point.lifecycle_item,
}));
{absolutePositions.map((pos, index) => (
<Tooltip key={`point-${index}`}>
<TooltipTrigger asChild>
<circle
cx={pos.x}
cy={pos.y}
r="7"
fill={getPointColor(
objectColorArray,
pos.lifecycle_item?.class_type,
)}
stroke="white"
strokeWidth="3"
style={{ cursor: onSeekToTime ? "pointer" : "default" }}
onClick={() => handlePointClick(pos.timestamp)}
/>
</TooltipTrigger>
<TooltipPortal>
<TooltipContent side="top" className="smart-capitalize">
{pos.lifecycle_item
? `${pos.lifecycle_item.class_type.replace("_", " ")} at ${new Date(pos.timestamp * 1000).toLocaleTimeString()}`
: t("objectTrack.trackedPoint")}
{onSeekToTime && (
<div className="mt-1 text-xs text-muted-foreground">
{t("objectTrack.clickToSeek")}
</div>
)}
</TooltipContent>
</TooltipPortal>
</Tooltip>
))}
return (
<g key={objData.objectId}>
{absolutePositions.length > 1 && (
<path
d={generateStraightPath(absolutePositions)}
fill="none"
stroke={objData.color}
strokeWidth="5"
strokeLinecap="round"
strokeLinejoin="round"
/>
)}
{currentBoundingBox && showBoundingBoxes && (
<g>
<rect
x={currentBoundingBox.left * videoWidth}
y={currentBoundingBox.top * videoHeight}
width={currentBoundingBox.width * videoWidth}
height={currentBoundingBox.height * videoHeight}
fill="none"
stroke={objectColor}
strokeWidth="5"
opacity="0.9"
/>
{absolutePositions.map((pos, index) => (
<Tooltip key={`${objData.objectId}-point-${index}`}>
<TooltipTrigger asChild>
<circle
cx={pos.x}
cy={pos.y}
r="7"
fill={getPointColor(
objData.color,
pos.lifecycle_item?.class_type,
)}
stroke="white"
strokeWidth="3"
style={{ cursor: onSeekToTime ? "pointer" : "default" }}
onClick={() => handlePointClick(pos.timestamp)}
/>
</TooltipTrigger>
<TooltipPortal>
<TooltipContent side="top" className="smart-capitalize">
{pos.lifecycle_item
? `${pos.lifecycle_item.class_type.replace("_", " ")} at ${new Date(pos.timestamp * 1000).toLocaleTimeString()}`
: t("objectTrack.trackedPoint")}
{onSeekToTime && (
<div className="mt-1 text-xs normal-case text-muted-foreground">
{t("objectTrack.clickToSeek")}
</div>
)}
</TooltipContent>
</TooltipPortal>
</Tooltip>
))}
<circle
cx={currentBoundingBox.centerX * videoWidth}
cy={currentBoundingBox.centerY * videoHeight}
r="5"
fill="rgb(255, 255, 0)" // yellow highlight
stroke={objectColor}
strokeWidth="5"
opacity="1"
/>
</g>
)}
{objData.currentBox && showBoundingBoxes && (
<g>
<rect
x={objData.currentBox[0] * videoWidth}
y={objData.currentBox[1] * videoHeight}
width={objData.currentBox[2] * videoWidth}
height={objData.currentBox[3] * videoHeight}
fill="none"
stroke={objData.color}
strokeWidth="5"
opacity="0.9"
/>
<circle
cx={
(objData.currentBox[0] + objData.currentBox[2] / 2) *
videoWidth
}
cy={
(objData.currentBox[1] + objData.currentBox[3]) *
videoHeight
}
r="5"
fill="rgb(255, 255, 0)" // yellow highlight
stroke={objData.color}
strokeWidth="5"
opacity="1"
/>
</g>
)}
</g>
);
})}
</svg>
);
}
// Generate a deterministic HSL color from a string (object ID)
function generateColorFromId(id: string): string {
let hash = 0;
for (let i = 0; i < id.length; i++) {
hash = id.charCodeAt(i) + ((hash << 5) - hash);
}
// Use golden ratio to distribute hues evenly
const hue = (hash * 137.508) % 360;
return `hsl(${hue}, 70%, 50%)`;
}
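The helper above rolls a djb2-style hash over the ID and multiplies by the golden angle (~137.508°) to spread hues apart. A self-contained sketch of the same idea; note the hash can go negative in JavaScript, so this version normalizes the hue into [0, 360) — an addition, not part of the original helper:

```typescript
// Deterministic HSL color from an arbitrary string id.
function colorFromId(id: string): string {
  let hash = 0;
  for (let i = 0; i < id.length; i++) {
    // djb2-style rolling hash: hash * 31 + charCode
    hash = id.charCodeAt(i) + ((hash << 5) - hash);
  }
  // Golden-angle spacing keeps hues of similar ids far apart;
  // double-modulo normalizes negative hashes into [0, 360).
  const hue = (((hash * 137.508) % 360) + 360) % 360;
  return `hsl(${hue}, 70%, 50%)`;
}
```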


@@ -40,7 +40,7 @@ export default function AnnotationOffsetSlider({ className }: Props) {
);
toast.success(
t("objectLifecycle.annotationSettings.offset.toast.success", {
t("trackingDetails.annotationSettings.offset.toast.success", {
camera,
}),
{ position: "top-center" },


@@ -79,7 +79,7 @@ export function AnnotationSettingsPane({
.then((res) => {
if (res.status === 200) {
toast.success(
t("objectLifecycle.annotationSettings.offset.toast.success", {
t("trackingDetails.annotationSettings.offset.toast.success", {
camera: event?.camera,
}),
{
@@ -142,7 +142,7 @@ export function AnnotationSettingsPane({
return (
<div className="mb-3 space-y-3 rounded-lg border border-secondary-foreground bg-background_alt p-2">
<Heading as="h4" className="my-2">
{t("objectLifecycle.annotationSettings.title")}
{t("trackingDetails.annotationSettings.title")}
</Heading>
<div className="flex flex-col">
<div className="flex flex-row items-center justify-start gap-2 p-3">
@@ -152,11 +152,11 @@ export function AnnotationSettingsPane({
onCheckedChange={setShowZones}
/>
<Label className="cursor-pointer" htmlFor="show-zones">
{t("objectLifecycle.annotationSettings.showAllZones.title")}
{t("trackingDetails.annotationSettings.showAllZones.title")}
</Label>
</div>
<div className="text-sm text-muted-foreground">
{t("objectLifecycle.annotationSettings.showAllZones.desc")}
{t("trackingDetails.annotationSettings.showAllZones.desc")}
</div>
</div>
<Separator className="my-2 flex bg-secondary" />
@@ -171,14 +171,14 @@ export function AnnotationSettingsPane({
render={({ field }) => (
<FormItem>
<FormLabel>
{t("objectLifecycle.annotationSettings.offset.label")}
{t("trackingDetails.annotationSettings.offset.label")}
</FormLabel>
<div className="flex flex-col gap-3 md:flex-row-reverse md:gap-8">
<div className="flex flex-row items-center gap-3 rounded-lg bg-destructive/50 p-3 text-sm text-primary-variant md:my-5">
<PiWarningCircle className="size-24" />
<div>
<Trans ns="views/explore">
objectLifecycle.annotationSettings.offset.desc
trackingDetails.annotationSettings.offset.desc
</Trans>
<div className="mt-2 flex items-center text-primary">
<Link
@@ -203,10 +203,10 @@ export function AnnotationSettingsPane({
</FormControl>
<FormDescription>
<Trans ns="views/explore">
objectLifecycle.annotationSettings.offset.millisecondsToOffset
trackingDetails.annotationSettings.offset.millisecondsToOffset
</Trans>
<div className="mt-2">
{t("objectLifecycle.annotationSettings.offset.tips")}
{t("trackingDetails.annotationSettings.offset.tips")}
</div>
</FormDescription>
</div>


@@ -105,7 +105,7 @@ export function ObjectPath({
<TooltipContent side="top" className="smart-capitalize">
{pos.lifecycle_item
? getLifecycleItemDescription(pos.lifecycle_item)
: t("objectLifecycle.trackedPoint")}
: t("trackingDetails.trackedPoint")}
</TooltipContent>
</TooltipPortal>
</Tooltip>


@@ -20,7 +20,7 @@ import { Event } from "@/types/event";
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { cn } from "@/lib/utils";
import { FrigatePlusDialog } from "../dialog/FrigatePlusDialog";
import ObjectLifecycle from "./ObjectLifecycle";
import TrackingDetails from "./TrackingDetails";
import Chip from "@/components/indicators/Chip";
import { FaDownload, FaImages, FaShareAlt } from "react-icons/fa";
import FrigatePlusIcon from "@/components/icons/FrigatePlusIcon";
@@ -411,7 +411,7 @@ export default function ReviewDetailDialog({
{pane == "details" && selectedEvent && (
<div className="mt-0 flex size-full flex-col gap-2">
<ObjectLifecycle event={selectedEvent} setPane={setPane} />
<TrackingDetails event={selectedEvent} setPane={setPane} />
</div>
)}
</Content>
@@ -544,7 +544,7 @@ function EventItem({
</Chip>
</TooltipTrigger>
<TooltipContent>
{t("itemMenu.viewObjectLifecycle.label")}
{t("itemMenu.viewTrackingDetails.label")}
</TooltipContent>
</Tooltip>
)}


@@ -34,8 +34,7 @@ import {
FaRegListAlt,
FaVideo,
} from "react-icons/fa";
import { FaRotate } from "react-icons/fa6";
import ObjectLifecycle from "./ObjectLifecycle";
import TrackingDetails from "./TrackingDetails";
import {
MobilePage,
MobilePageContent,
@@ -80,12 +79,13 @@ import FaceSelectionDialog from "../FaceSelectionDialog";
import { getTranslatedLabel } from "@/utils/i18n";
import { CgTranscript } from "react-icons/cg";
import { CameraNameLabel } from "@/components/camera/CameraNameLabel";
import { PiPath } from "react-icons/pi";
const SEARCH_TABS = [
"details",
"snapshot",
"video",
"object_lifecycle",
"tracking_details",
] as const;
export type SearchTab = (typeof SEARCH_TABS)[number];
@@ -160,7 +160,7 @@ export default function SearchDetailDialog({
}
if (search.data.type != "object" || !search.has_clip) {
const index = views.indexOf("object_lifecycle");
const index = views.indexOf("tracking_details");
views.splice(index, 1);
}
@@ -235,9 +235,7 @@ export default function SearchDetailDialog({
{item == "details" && <FaRegListAlt className="size-4" />}
{item == "snapshot" && <FaImage className="size-4" />}
{item == "video" && <FaVideo className="size-4" />}
{item == "object_lifecycle" && (
<FaRotate className="size-4" />
)}
{item == "tracking_details" && <PiPath className="size-4" />}
<div className="smart-capitalize">{t(`type.${item}`)}</div>
</ToggleGroupItem>
))}
@@ -268,8 +266,8 @@ export default function SearchDetailDialog({
/>
)}
{page == "video" && <VideoTab search={search} />}
{page == "object_lifecycle" && (
<ObjectLifecycle
{page == "tracking_details" && (
<TrackingDetails
className="w-full overflow-x-hidden"
event={search as unknown as Event}
fullscreen={true}


@@ -3,7 +3,7 @@ import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { Event } from "@/types/event";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { Button } from "@/components/ui/button";
import { ObjectLifecycleSequence } from "@/types/timeline";
import { TrackingDetailsSequence } from "@/types/timeline";
import Heading from "@/components/ui/heading";
import { ReviewDetailPaneType } from "@/types/review";
import { FrigateConfig } from "@/types/frigateConfig";
@@ -41,30 +41,40 @@ import {
ContextMenuItem,
ContextMenuTrigger,
} from "@/components/ui/context-menu";
import { useNavigate } from "react-router-dom";
import {
DropdownMenu,
DropdownMenuTrigger,
DropdownMenuContent,
DropdownMenuItem,
DropdownMenuPortal,
} from "@/components/ui/dropdown-menu";
import { Link, useNavigate } from "react-router-dom";
import { ObjectPath } from "./ObjectPath";
import { getLifecycleItemDescription } from "@/utils/lifecycleUtil";
import { IoPlayCircleOutline } from "react-icons/io5";
import { useTranslation } from "react-i18next";
import { getTranslatedLabel } from "@/utils/i18n";
import { Badge } from "@/components/ui/badge";
import { HiDotsHorizontal } from "react-icons/hi";
import axios from "axios";
import { toast } from "sonner";
type ObjectLifecycleProps = {
type TrackingDetailsProps = {
className?: string;
event: Event;
fullscreen?: boolean;
setPane: React.Dispatch<React.SetStateAction<ReviewDetailPaneType>>;
};
export default function ObjectLifecycle({
export default function TrackingDetails({
className,
event,
fullscreen = false,
setPane,
}: ObjectLifecycleProps) {
}: TrackingDetailsProps) {
const { t } = useTranslation(["views/explore"]);
const { data: eventSequence } = useSWR<ObjectLifecycleSequence[]>([
const { data: eventSequence } = useSWR<TrackingDetailsSequence[]>([
"timeline",
{
source_id: event.id,
@@ -94,6 +104,10 @@ export default function ObjectLifecycle({
);
}, [config, event]);
const label = event.sub_label
? event.sub_label
: getTranslatedLabel(event.label);
const getZoneColor = useCallback(
(zoneName: string) => {
const zoneColor =
@@ -285,10 +299,10 @@ export default function ObjectLifecycle({
timezone: config.ui.timezone,
date_format:
config.ui.time_format == "24hour"
? t("time.formattedTimestampHourMinuteSecond.24hour", {
? t("time.formattedTimestamp.24hour", {
ns: "common",
})
: t("time.formattedTimestampHourMinuteSecond.12hour", {
: t("time.formattedTimestamp.12hour", {
ns: "common",
}),
time_style: "medium",
@@ -301,10 +315,10 @@ export default function ObjectLifecycle({
timezone: config.ui.timezone,
date_format:
config.ui.time_format == "24hour"
? t("time.formattedTimestampHourMinuteSecond.24hour", {
? t("time.formattedTimestamp.24hour", {
ns: "common",
})
: t("time.formattedTimestampHourMinuteSecond.12hour", {
: t("time.formattedTimestamp.12hour", {
ns: "common",
}),
time_style: "medium",
@@ -408,6 +422,7 @@ export default function ObjectLifecycle({
return (
<div className={className}>
<span tabIndex={0} className="sr-only" />
{!fullscreen && (
<div className={cn("flex items-center gap-2")}>
<Button
@@ -442,7 +457,7 @@ export default function ObjectLifecycle({
<div className="relative aspect-video">
<div className="flex flex-col items-center justify-center p-20 text-center">
<LuFolderX className="size-16" />
{t("objectLifecycle.noImageFound")}
{t("trackingDetails.noImageFound")}
</div>
</div>
)}
@@ -553,7 +568,7 @@ export default function ObjectLifecycle({
}
>
<div className="text-primary">
{t("objectLifecycle.createObjectMask")}
{t("trackingDetails.createObjectMask")}
</div>
</div>
</ContextMenuItem>
@@ -563,7 +578,7 @@ export default function ObjectLifecycle({
</div>
<div className="mt-3 flex flex-row items-center justify-between">
<Heading as="h4">{t("objectLifecycle.title")}</Heading>
<Heading as="h4">{t("trackingDetails.title")}</Heading>
<div className="flex flex-row gap-2">
<Tooltip>
@@ -571,7 +586,7 @@ export default function ObjectLifecycle({
<Button
variant={showControls ? "select" : "default"}
className="size-7 p-1.5"
aria-label={t("objectLifecycle.adjustAnnotationSettings")}
aria-label={t("trackingDetails.adjustAnnotationSettings")}
>
<LuSettings
className="size-5"
@@ -581,7 +596,7 @@ export default function ObjectLifecycle({
</TooltipTrigger>
<TooltipPortal>
<TooltipContent>
{t("objectLifecycle.adjustAnnotationSettings")}
{t("trackingDetails.adjustAnnotationSettings")}
</TooltipContent>
</TooltipPortal>
</Tooltip>
@@ -589,10 +604,10 @@ export default function ObjectLifecycle({
</div>
<div className="flex flex-row items-center justify-between">
<div className="mb-2 text-sm text-muted-foreground">
{t("objectLifecycle.scrollViewTips")}
{t("trackingDetails.scrollViewTips")}
</div>
<div className="min-w-20 text-right text-sm text-muted-foreground">
{t("objectLifecycle.count", {
{t("trackingDetails.count", {
first: selectedIndex + 1,
second: eventSequence?.length ?? 0,
})}
@@ -600,7 +615,7 @@ export default function ObjectLifecycle({
</div>
{config?.cameras[event.camera]?.onvif.autotracking.enabled_in_config && (
<div className="-mt-2 mb-2 text-sm text-danger">
{t("objectLifecycle.autoTrackingTips")}
{t("trackingDetails.autoTrackingTips")}
</div>
)}
{showControls && (
@@ -628,17 +643,34 @@ export default function ObjectLifecycle({
}}
role="button"
>
<div className={cn("ml-1 rounded-full bg-muted-foreground p-2")}>
<div
className={cn(
"relative ml-2 rounded-full bg-muted-foreground p-2",
)}
>
{getIconForLabel(
event.label,
"size-6 text-primary dark:text-white",
event.sub_label ? event.label + "-verified" : event.label,
"size-4 text-white",
)}
</div>
<div className="flex items-end gap-2">
<span>{getTranslatedLabel(event.label)}</span>
<div className="flex items-center gap-2">
<span className="capitalize">{label}</span>
<span className="text-secondary-foreground">
{formattedStart ?? ""} - {formattedEnd ?? ""}
</span>
{event.data?.recognized_license_plate && (
<>
<span className="text-secondary-foreground">·</span>
<div className="text-sm text-secondary-foreground">
<Link
to={`/explore?recognized_license_plate=${event.data.recognized_license_plate}`}
className="text-sm"
>
{event.data.recognized_license_plate}
</Link>
</div>
</>
)}
</div>
</div>
</div>
@@ -734,7 +766,7 @@ export default function ObjectLifecycle({
}
type GetTimelineIconParams = {
lifecycleItem: ObjectLifecycleSequence;
lifecycleItem: TrackingDetailsSequence;
className?: string;
};
@@ -772,7 +804,7 @@ export function LifecycleIcon({
}
type LifecycleIconRowProps = {
item: ObjectLifecycleSequence;
item: TrackingDetailsSequence;
isActive?: boolean;
formattedEventTimestamp: string;
ratio: string;
@@ -794,7 +826,11 @@ function LifecycleIconRow({
setSelectedZone,
getZoneColor,
}: LifecycleIconRowProps) {
const { t } = useTranslation(["views/explore"]);
const { t } = useTranslation(["views/explore", "components/player"]);
const { data: config } = useSWR<FrigateConfig>("config");
const [isOpen, setIsOpen] = useState(false);
const navigate = useNavigate();
return (
<div
@@ -816,19 +852,21 @@ function LifecycleIconRow({
/>
</div>
<div className="flex w-full flex-row justify-between">
<div className="ml-2 flex w-full min-w-0 flex-1">
<div className="flex flex-col">
<div>{getLifecycleItemDescription(item)}</div>
<div className="mt-1 flex flex-wrap items-center gap-2 text-sm text-secondary-foreground md:gap-5">
<div className="text-md flex items-start break-words text-left">
{getLifecycleItemDescription(item)}
</div>
<div className="mt-1 flex flex-wrap items-center gap-2 text-xs text-secondary-foreground md:gap-5">
<div className="flex items-center gap-1">
<span className="text-primary-variant">
{t("objectLifecycle.lifecycleItemDesc.header.ratio")}
{t("trackingDetails.lifecycleItemDesc.header.ratio")}
</span>
<span className="font-medium text-primary">{ratio}</span>
</div>
<div className="flex items-center gap-1">
<span className="text-primary-variant">
{t("objectLifecycle.lifecycleItemDesc.header.area")}
{t("trackingDetails.lifecycleItemDesc.header.area")}
</span>
{areaPx !== undefined && areaPct !== undefined ? (
<span className="font-medium text-primary">
@@ -877,8 +915,71 @@ function LifecycleIconRow({
)}
</div>
</div>
</div>
<div className="ml-3 flex-shrink-0 px-1 text-right text-xs text-primary-variant">
<div className="flex flex-row items-center gap-3">
<div className="whitespace-nowrap">{formattedEventTimestamp}</div>
{(config?.plus?.enabled || item.data.box) && (
<DropdownMenu open={isOpen} onOpenChange={setIsOpen}>
<DropdownMenuTrigger>
<div className="rounded p-1 pr-2" role="button">
<HiDotsHorizontal className="size-4 text-muted-foreground" />
</div>
</DropdownMenuTrigger>
<DropdownMenuPortal>
<DropdownMenuContent>
{config?.plus?.enabled && (
<DropdownMenuItem
className="cursor-pointer"
onSelect={async () => {
const resp = await axios.post(
`/${item.camera}/plus/${item.timestamp}`,
);
if (resp && resp.status == 200) {
toast.success(
t("toast.success.submittedFrigatePlus", {
ns: "components/player",
}),
{
position: "top-center",
},
);
} else {
toast.error(
t("toast.error.submitFrigatePlusFailed", {
ns: "components/player",
}),
{
position: "top-center",
},
);
}
}}
>
{t("itemMenu.submitToPlus.label")}
</DropdownMenuItem>
)}
{item.data.box && (
<DropdownMenuItem
className="cursor-pointer"
onSelect={() => {
setIsOpen(false);
setTimeout(() => {
navigate(
`/settings?page=masksAndZones&camera=${item.camera}&object_mask=${item.data.box}`,
);
}, 0);
}}
>
{t("trackingDetails.createObjectMask")}
</DropdownMenuItem>
)}
</DropdownMenuContent>
</DropdownMenuPortal>
</DropdownMenu>
)}
</div>
</div>
</div>
</div>


@@ -20,7 +20,6 @@ import { cn } from "@/lib/utils";
import { ASPECT_VERTICAL_LAYOUT, RecordingPlayerError } from "@/types/record";
import { useTranslation } from "react-i18next";
import ObjectTrackOverlay from "@/components/overlay/ObjectTrackOverlay";
import { DetailStreamContextType } from "@/context/detail-stream-context";
// Android native hls does not seek correctly
const USE_NATIVE_HLS = !isAndroid;
@@ -54,8 +53,11 @@ type HlsVideoPlayerProps = {
onUploadFrame?: (playTime: number) => Promise<AxiosResponse> | undefined;
toggleFullscreen?: () => void;
onError?: (error: RecordingPlayerError) => void;
detail?: Partial<DetailStreamContextType>;
isDetailMode?: boolean;
camera?: string;
currentTimeOverride?: number;
};
export default function HlsVideoPlayer({
videoRef,
containerRef,
@@ -75,17 +77,15 @@ export default function HlsVideoPlayer({
onUploadFrame,
toggleFullscreen,
onError,
detail,
isDetailMode = false,
camera,
currentTimeOverride,
}: HlsVideoPlayerProps) {
const { t } = useTranslation("components/player");
const { data: config } = useSWR<FrigateConfig>("config");
// for detail stream context in History
const selectedObjectId = detail?.selectedObjectId;
const selectedObjectTimeline = detail?.selectedObjectTimeline;
const currentTime = detail?.currentTime;
const camera = detail?.camera;
const isDetailMode = detail?.isDetailMode ?? false;
const currentTime = currentTimeOverride;
// playback
@@ -316,16 +316,14 @@
}}
>
{isDetailMode &&
selectedObjectId &&
camera &&
currentTime &&
videoDimensions.width > 0 &&
videoDimensions.height > 0 && (
<div className="absolute z-50 size-full">
<ObjectTrackOverlay
key={`${selectedObjectId}-${currentTime}`}
key={`overlay-${currentTime}`}
camera={camera}
selectedObjectId={selectedObjectId}
showBoundingBoxes={!isPlaying}
currentTime={currentTime}
videoWidth={videoDimensions.width}
@@ -336,7 +334,6 @@
onSeekToTime(timestamp, play);
}
}}
objectTimeline={selectedObjectTimeline}
/>
</div>
)}


@@ -1,7 +1,7 @@
import { Recording } from "@/types/record";
import { DynamicPlayback } from "@/types/playback";
import { PreviewController } from "../PreviewPlayer";
import { TimeRange, ObjectLifecycleSequence } from "@/types/timeline";
import { TimeRange, TrackingDetailsSequence } from "@/types/timeline";
import { calculateInpointOffset } from "@/utils/videoUtil";
type PlayerMode = "playback" | "scrubbing";
@@ -12,7 +12,7 @@ export class DynamicVideoController {
private playerController: HTMLVideoElement;
private previewController: PreviewController;
private setNoRecording: (noRecs: boolean) => void;
private setFocusedItem: (timeline: ObjectLifecycleSequence) => void;
private setFocusedItem: (timeline: TrackingDetailsSequence) => void;
private playerMode: PlayerMode = "playback";
// playback
@@ -29,7 +29,7 @@
annotationOffset: number,
defaultMode: PlayerMode,
setNoRecording: (noRecs: boolean) => void,
setFocusedItem: (timeline: ObjectLifecycleSequence) => void,
setFocusedItem: (timeline: TrackingDetailsSequence) => void,
) {
this.camera = camera;
this.playerController = playerController;
@@ -132,7 +132,7 @@
});
}
seekToTimelineItem(timeline: ObjectLifecycleSequence) {
seekToTimelineItem(timeline: TrackingDetailsSequence) {
this.playerController.pause();
this.seekToTimestamp(timeline.timestamp + this.annotationOffset);
this.setFocusedItem(timeline);


@@ -61,7 +61,11 @@ export default function DynamicVideoPlayer({
const { data: config } = useSWR<FrigateConfig>("config");
// for detail stream context in History
const detail = useDetailStream();
const {
isDetailMode,
camera: contextCamera,
currentTime,
} = useDetailStream();
// controlling playback
@@ -295,7 +299,9 @@
setIsBuffering(true);
}
}}
detail={detail}
isDetailMode={isDetailMode}
camera={contextCamera || camera}
currentTimeOverride={currentTime}
/>
<PreviewPlayer
className={cn(


@@ -1,5 +1,5 @@
import { useEffect, useMemo, useRef, useState } from "react";
import { ObjectLifecycleSequence } from "@/types/timeline";
import { TrackingDetailsSequence } from "@/types/timeline";
import { getLifecycleItemDescription } from "@/utils/lifecycleUtil";
import { useDetailStream } from "@/context/detail-stream-context";
import scrollIntoView from "scroll-into-view-if-needed";
@@ -22,6 +22,7 @@ import EventMenu from "@/components/timeline/EventMenu";
import { FrigatePlusDialog } from "@/components/overlay/dialog/FrigatePlusDialog";
import { cn } from "@/lib/utils";
import { Tooltip, TooltipContent, TooltipTrigger } from "../ui/tooltip";
import { Link } from "react-router-dom";
type DetailStreamProps = {
reviewItems?: ReviewSegment[];
@@ -171,7 +172,11 @@
<FrigatePlusDialog
upload={upload}
onClose={() => setUpload(undefined)}
onEventUploaded={() => setUpload(undefined)}
onEventUploaded={() => {
if (upload) {
upload.plus_id = "new_upload";
}
}}
/>
<div
@@ -254,7 +259,9 @@ function ReviewGroup({
const rawIconLabels: string[] = [
...(fetchedEvents
? fetchedEvents.map((e) => e.label)
? fetchedEvents.map((e) =>
e.sub_label ? e.label + "-verified" : e.label,
)
: (review.data?.objects ?? [])),
...(review.data?.audio ?? []),
];
@@ -317,7 +324,7 @@ function ReviewGroup({
<div className="ml-1 flex flex-col items-start gap-1.5">
<div className="flex flex-row gap-3">
<div className="text-sm font-medium">{displayTime}</div>
<div className="flex items-center gap-2">
<div className="relative flex items-center gap-2 text-white">
{iconLabels.slice(0, 5).map((lbl, idx) => (
<div
key={`${lbl}-${idx}`}
@@ -423,30 +430,41 @@
}: EventListProps) {
const { data: config } = useSWR<FrigateConfig>("config");
const { selectedObjectId, setSelectedObjectId } = useDetailStream();
const { selectedObjectIds, setSelectedObjectIds, toggleObjectSelection } =
useDetailStream();
const isSelected = selectedObjectIds.includes(event.id);
const label = event.sub_label || getTranslatedLabel(event.label);
const handleObjectSelect = (event: Event | undefined) => {
if (event) {
onSeek(event.start_time ?? 0);
setSelectedObjectId(event.id);
setSelectedObjectIds([]);
setSelectedObjectIds([event.id]);
onSeek(event.start_time);
} else {
setSelectedObjectId(undefined);
setSelectedObjectIds([]);
}
};
// Clear selectedObjectId when effectiveTime has passed this event's end_time
const handleTimelineClick = (ts: number, play?: boolean) => {
handleObjectSelect(event);
onSeek(ts, play);
};
// Clear selection when effectiveTime has passed this event's end_time
useEffect(() => {
if (selectedObjectId === event.id && effectiveTime && event.end_time) {
if (isSelected && effectiveTime && event.end_time) {
if (effectiveTime >= event.end_time) {
setSelectedObjectId(undefined);
toggleObjectSelection(event.id);
}
}
}, [
selectedObjectId,
isSelected,
event.id,
event.end_time,
effectiveTime,
setSelectedObjectId,
toggleObjectSelection,
]);
return (
@@ -454,48 +472,68 @@
<div
className={cn(
"rounded-md bg-secondary p-2",
event.id == selectedObjectId
isSelected
? "bg-secondary-highlight"
: "outline-transparent duration-500",
event.id != selectedObjectId &&
(effectiveTime ?? 0) >= (event.start_time ?? 0) - 0.5 &&
(effectiveTime ?? 0) <=
(event.end_time ?? event.start_time ?? 0) + 0.5 &&
"bg-secondary-highlight",
)}
>
<div className="ml-1.5 flex w-full items-center justify-between">
<div
className="flex items-center gap-2 text-sm font-medium"
onClick={(e) => {
e.stopPropagation();
handleObjectSelect(
event.id == selectedObjectId ? undefined : event,
);
}}
role="button"
>
<div className="ml-1.5 flex w-full items-end justify-between">
<div className="flex flex-1 items-center gap-2 text-sm font-medium">
<div
className={cn(
"rounded-full p-1",
event.id == selectedObjectId
"relative rounded-full p-1 text-white",
(effectiveTime ?? 0) >= (event.start_time ?? 0) - 0.5 &&
(effectiveTime ?? 0) <=
(event.end_time ?? event.start_time ?? 0) + 0.5
? "bg-selected"
: "bg-muted-foreground",
)}
onClick={(e) => {
e.stopPropagation();
onSeek(event.start_time);
handleObjectSelect(event);
}}
role="button"
>
{getIconForLabel(event.label, "size-3 text-white")}
{getIconForLabel(
event.sub_label ? event.label + "-verified" : event.label,
"size-3 text-white",
)}
</div>
<div className="flex items-end gap-2">
<span>{getTranslatedLabel(event.label)}</span>
<div
className="flex flex-1 items-center gap-2"
onClick={(e) => {
e.stopPropagation();
onSeek(event.start_time);
handleObjectSelect(event);
}}
role="button"
>
<div className="flex gap-2">
<span className="capitalize">{label}</span>
{event.data?.recognized_license_plate && (
<>
<span className="text-secondary-foreground">·</span>
<div className="text-sm text-secondary-foreground">
<Link
to={`/explore?recognized_license_plate=${event.data.recognized_license_plate}`}
className="text-sm"
>
{event.data.recognized_license_plate}
</Link>
</div>
</>
)}
</div>
</div>
</div>
<div className="mr-2 flex flex-1 flex-row justify-end">
<div className="mr-2 flex flex-row justify-end">
<EventMenu
event={event}
config={config}
onOpenUpload={(e) => onOpenUpload?.(e)}
selectedObjectId={selectedObjectId}
setSelectedObjectId={handleObjectSelect}
isSelected={isSelected}
onToggleSelection={handleObjectSelect}
/>
</div>
</div>
@@ -503,8 +541,10 @@
<div className="mt-2">
<ObjectTimeline
eventId={event.id}
onSeek={onSeek}
onSeek={handleTimelineClick}
effectiveTime={effectiveTime}
startTime={event.start_time}
endTime={event.end_time}
/>
</div>
</div>
@@ -513,10 +553,11 @@
}
type LifecycleItemProps = {
item: ObjectLifecycleSequence;
item: TrackingDetailsSequence;
isActive?: boolean;
onSeek?: (timestamp: number, play?: boolean) => void;
effectiveTime?: number;
isTimelineActive?: boolean;
};
function LifecycleItem({
@@ -524,6 +565,7 @@ function LifecycleItem({
isActive,
onSeek,
effectiveTime,
isTimelineActive = false,
}: LifecycleItemProps) {
const { t } = useTranslation("views/events");
const { data: config } = useSWR<FrigateConfig>("config");
@@ -576,7 +618,7 @@
<div
role="button"
onClick={() => {
onSeek?.(item.timestamp ?? 0, false);
onSeek?.(item.timestamp, false);
}}
className={cn(
"flex cursor-pointer items-center gap-2 text-sm text-primary-variant",
@@ -588,16 +630,18 @@
<div className="relative flex size-4 items-center justify-center">
<LuCircle
className={cn(
"relative z-10 ml-[1px] size-2.5 fill-secondary-foreground stroke-none",
"relative z-10 size-2.5 fill-secondary-foreground stroke-none",
(isActive || (effectiveTime ?? 0) >= (item?.timestamp ?? 0)) &&
isTimelineActive &&
"fill-selected duration-300",
)}
/>
</div>
<div className="flex w-full flex-row justify-between">
<div className="ml-0.5 flex min-w-0 flex-1">
<Tooltip>
<TooltipTrigger>
<div className="flex items-start text-left">
<div className="flex items-start break-words text-left">
{getLifecycleItemDescription(item)}
</div>
</TooltipTrigger>
@@ -606,18 +650,20 @@
<div className="flex flex-col gap-1">
<div className="flex items-start gap-1">
<span className="text-muted-foreground">
{t("objectLifecycle.lifecycleItemDesc.header.ratio")}
{t("trackingDetails.lifecycleItemDesc.header.ratio")}
</span>
<span className="font-medium text-foreground">{ratio}</span>
</div>
<div className="flex items-start gap-1">
<span className="text-muted-foreground">
{t("objectLifecycle.lifecycleItemDesc.header.area")}
{t("trackingDetails.lifecycleItemDesc.header.area")}
</span>
{areaPx !== undefined && areaPct !== undefined ? (
<span className="font-medium text-foreground">
{areaPx} {t("pixels", { ns: "common" })} · {areaPct}%
{areaPx} {t("pixels", { ns: "common" })}{" "}
<span className="text-secondary-foreground">·</span>{" "}
{areaPct}%
</span>
) : (
<span>N/A</span>
@@ -627,7 +673,10 @@
</div>
</TooltipContent>
</Tooltip>
<div className={cn("p-1 text-xs")}>{formattedEventTimestamp}</div>
</div>
<div className="ml-3 flex-shrink-0 px-1 text-right text-xs text-primary-variant">
<div className="whitespace-nowrap">{formattedEventTimestamp}</div>
</div>
</div>
);
@@ -638,13 +687,17 @@ function ObjectTimeline({
eventId,
onSeek,
effectiveTime,
startTime,
endTime,
}: {
eventId: string;
onSeek: (ts: number, play?: boolean) => void;
effectiveTime?: number;
startTime?: number;
endTime?: number;
}) {
const { t } = useTranslation("views/events");
const { data: timeline, isValidating } = useSWR<ObjectLifecycleSequence[]>([
const { data: timeline, isValidating } = useSWR<TrackingDetailsSequence[]>([
"timeline",
{
source_id: eventId,
@@ -663,9 +716,17 @@
);
}
// Check if current time is within the event's start/stop range
const isWithinEventRange =
effectiveTime !== undefined &&
startTime !== undefined &&
endTime !== undefined &&
effectiveTime >= startTime &&
effectiveTime <= endTime;
// Calculate how far down the blue line should extend based on effectiveTime
const calculateLineHeight = () => {
if (!timeline || timeline.length === 0) return 0;
if (!timeline || timeline.length === 0 || !isWithinEventRange) return 0;
const currentTime = effectiveTime ?? 0;
@@ -707,15 +768,19 @@
);
};
const blueLineHeight = calculateLineHeight();
const activeLineHeight = calculateLineHeight();
return (
<div className="-pb-2 relative mx-2">
<div className="absolute -top-2 bottom-2 left-2 z-0 w-0.5 -translate-x-1/2 bg-secondary-foreground" />
<div
className="absolute left-2 top-2 z-[5] max-h-[calc(100%-1rem)] w-0.5 -translate-x-1/2 bg-selected transition-all duration-300"
style={{ height: `${blueLineHeight}%` }}
/>
{isWithinEventRange && (
<div
className={cn(
"absolute left-2 top-2 z-[5] max-h-[calc(100%-1rem)] w-0.5 -translate-x-1/2 bg-selected transition-all duration-300",
)}
style={{ height: `${activeLineHeight}%` }}
/>
)}
<div className="space-y-2">
{timeline.map((event, idx) => {
const isActive =
@@ -728,6 +793,7 @@
onSeek={onSeek}
isActive={isActive}
effectiveTime={effectiveTime}
isTimelineActive={isWithinEventRange}
/>
);
})}
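The range gate added above keeps the highlighted progress line from rendering once playback moves outside the tracked object's window. A minimal standalone sketch of that check (the function name and parameter list are illustrative, not the component's API):

```typescript
// Only light up the timeline when the current playback position falls
// inside the event's [startTime, endTime] window; any undefined input
// means we cannot decide, so default to inactive.
function isWithinEventRange(
  effectiveTime?: number,
  startTime?: number,
  endTime?: number,
): boolean {
  return (
    effectiveTime !== undefined &&
    startTime !== undefined &&
    endTime !== undefined &&
    effectiveTime >= startTime &&
    effectiveTime <= endTime
  );
}
```

With this gate, `calculateLineHeight` can short-circuit to 0 instead of clamping a stale percentage.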


@@ -12,14 +12,15 @@ import { useNavigate } from "react-router-dom";
import { useTranslation } from "react-i18next";
import { Event } from "@/types/event";
import { FrigateConfig } from "@/types/frigateConfig";
import { useState } from "react";
type EventMenuProps = {
event: Event;
config?: FrigateConfig;
onOpenUpload?: (e: Event) => void;
onOpenSimilarity?: (e: Event) => void;
selectedObjectId?: string;
setSelectedObjectId?: (event: Event | undefined) => void;
isSelected?: boolean;
onToggleSelection?: (event: Event | undefined) => void;
};
export default function EventMenu({
@@ -27,25 +28,26 @@ export default function EventMenu({
config,
onOpenUpload,
onOpenSimilarity,
selectedObjectId,
setSelectedObjectId,
isSelected = false,
onToggleSelection,
}: EventMenuProps) {
const apiHost = useApiHost();
const navigate = useNavigate();
const { t } = useTranslation("views/explore");
const [isOpen, setIsOpen] = useState(false);
const handleObjectSelect = () => {
if (event.id === selectedObjectId) {
setSelectedObjectId?.(undefined);
if (isSelected) {
onToggleSelection?.(undefined);
} else {
setSelectedObjectId?.(event);
onToggleSelection?.(event);
}
};
return (
<>
<span tabIndex={0} className="sr-only" />
<DropdownMenu>
<DropdownMenu open={isOpen} onOpenChange={setIsOpen}>
<DropdownMenuTrigger>
<div className="rounded p-1 pr-2" role="button">
<HiDotsHorizontal className="size-4 text-muted-foreground" />
@@ -54,7 +56,7 @@ export default function EventMenu({
<DropdownMenuPortal>
<DropdownMenuContent>
<DropdownMenuItem onSelect={handleObjectSelect}>
{event.id === selectedObjectId
{isSelected
? t("itemMenu.hideObjectDetails.label")
: t("itemMenu.showObjectDetails.label")}
</DropdownMenuItem>
@@ -85,6 +87,7 @@
config?.plus?.enabled && (
<DropdownMenuItem
onSelect={() => {
setIsOpen(false);
onOpenUpload?.(event);
}}
>


@@ -212,13 +212,13 @@ const CarouselPrevious = React.forwardRef<
: "-top-12 left-1/2 -translate-x-1/2 rotate-90",
className,
)}
aria-label={t("objectLifecycle.carousel.previous")}
aria-label={t("trackingDetails.carousel.previous")}
disabled={!canScrollPrev}
onClick={scrollPrev}
{...props}
>
<ArrowLeft className="h-4 w-4" />
<span className="sr-only">{t("objectLifecycle.carousel.previous")}</span>
<span className="sr-only">{t("trackingDetails.carousel.previous")}</span>
</Button>
);
});
@@ -243,13 +243,13 @@
: "-bottom-12 left-1/2 -translate-x-1/2 rotate-90",
className,
)}
aria-label={t("objectLifecycle.carousel.next")}
aria-label={t("trackingDetails.carousel.next")}
disabled={!canScrollNext}
onClick={scrollNext}
{...props}
>
<ArrowRight className="h-4 w-4" />
<span className="sr-only">{t("objectLifecycle.carousel.next")}</span>
<span className="sr-only">{t("trackingDetails.carousel.next")}</span>
</Button>
);
});


@@ -1,16 +1,15 @@
import React, { createContext, useContext, useState, useEffect } from "react";
import { FrigateConfig } from "@/types/frigateConfig";
import useSWR from "swr";
import { ObjectLifecycleSequence } from "@/types/timeline";
export interface DetailStreamContextType {
selectedObjectId: string | undefined;
selectedObjectTimeline?: ObjectLifecycleSequence[];
selectedObjectIds: string[];
currentTime: number;
camera: string;
annotationOffset: number; // milliseconds
setSelectedObjectIds: React.Dispatch<React.SetStateAction<string[]>>;
setAnnotationOffset: (ms: number) => void;
setSelectedObjectId: (id: string | undefined) => void;
toggleObjectSelection: (id: string | undefined) => void;
isDetailMode: boolean;
}
@@ -31,13 +30,21 @@
currentTime,
camera,
}: DetailStreamProviderProps) {
const [selectedObjectId, setSelectedObjectId] = useState<
string | undefined
>();
const [selectedObjectIds, setSelectedObjectIds] = useState<string[]>([]);
const { data: selectedObjectTimeline } = useSWR<ObjectLifecycleSequence[]>(
selectedObjectId ? ["timeline", { source_id: selectedObjectId }] : null,
);
const toggleObjectSelection = (id: string | undefined) => {
if (id === undefined) {
setSelectedObjectIds([]);
} else {
setSelectedObjectIds((prev) => {
if (prev.includes(id)) {
return prev.filter((existingId) => existingId !== id);
} else {
return [...prev, id];
}
});
}
};
const { data: config } = useSWR<FrigateConfig>("config");
@@ -52,14 +59,19 @@
setAnnotationOffset(cfgOffset);
}, [config, camera]);
// Clear selected objects when exiting detail mode or changing cameras
useEffect(() => {
setSelectedObjectIds([]);
}, [isDetailMode, camera]);
const value: DetailStreamContextType = {
selectedObjectId,
selectedObjectTimeline,
selectedObjectIds,
currentTime,
camera,
annotationOffset,
setAnnotationOffset,
setSelectedObjectId,
setSelectedObjectIds,
toggleObjectSelection,
isDetailMode,
};
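The provider's new multi-select behavior reduces to a pure list toggle: `undefined` clears everything, a known id is removed, an unknown id is appended. A minimal sketch of that logic outside React state (a hypothetical standalone helper mirroring `toggleObjectSelection`):

```typescript
// Pure toggle over a selection list; returns a new array so it can be
// passed directly to a React state updater.
function toggleSelection(prev: string[], id: string | undefined): string[] {
  if (id === undefined) {
    return []; // explicit clear, e.g. when exiting detail mode
  }
  return prev.includes(id)
    ? prev.filter((existingId) => existingId !== id) // deselect
    : [...prev, id]; // select, preserving prior selections
}
```

Keeping the toggle pure makes the `setSelectedObjectIds((prev) => ...)` call in the provider a one-liner and easy to unit-test.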


@@ -22,6 +22,7 @@ export interface Event {
area: number;
ratio: number;
type: "object" | "audio" | "manual";
recognized_license_plate?: string;
path_data: [number[], number][];
};
}


@@ -10,7 +10,7 @@ export enum LifecycleClassType {
PATH_POINT = "path_point",
}
export type ObjectLifecycleSequence = {
export type TrackingDetailsSequence = {
camera: string;
timestamp: number;
data: {
@@ -38,5 +38,5 @@ export type Position = {
x: number;
y: number;
timestamp: number;
lifecycle_item?: ObjectLifecycleSequence;
lifecycle_item?: TrackingDetailsSequence;
};


@@ -1,37 +1,38 @@
import { ObjectLifecycleSequence } from "@/types/timeline";
import { TrackingDetailsSequence } from "@/types/timeline";
import { t } from "i18next";
import { getTranslatedLabel } from "./i18n";
import { capitalizeFirstLetter } from "./stringUtil";
export function getLifecycleItemDescription(
lifecycleItem: ObjectLifecycleSequence,
lifecycleItem: TrackingDetailsSequence,
) {
const rawLabel = Array.isArray(lifecycleItem.data.sub_label)
? lifecycleItem.data.sub_label[0]
: lifecycleItem.data.sub_label || lifecycleItem.data.label;
const label = lifecycleItem.data.sub_label
? rawLabel
? capitalizeFirstLetter(rawLabel)
: getTranslatedLabel(rawLabel);
switch (lifecycleItem.class_type) {
case "visible":
return t("objectLifecycle.lifecycleItemDesc.visible", {
return t("trackingDetails.lifecycleItemDesc.visible", {
ns: "views/explore",
label,
});
case "entered_zone":
return t("objectLifecycle.lifecycleItemDesc.entered_zone", {
return t("trackingDetails.lifecycleItemDesc.entered_zone", {
ns: "views/explore",
label,
zones: lifecycleItem.data.zones.join(" and ").replaceAll("_", " "),
});
case "active":
return t("objectLifecycle.lifecycleItemDesc.active", {
return t("trackingDetails.lifecycleItemDesc.active", {
ns: "views/explore",
label,
});
case "stationary":
return t("objectLifecycle.lifecycleItemDesc.stationary", {
return t("trackingDetails.lifecycleItemDesc.stationary", {
ns: "views/explore",
label,
});
@@ -42,7 +43,7 @@ export function getLifecycleItemDescription(
lifecycleItem.data.attribute == "license_plate"
) {
title = t(
"objectLifecycle.lifecycleItemDesc.attribute.faceOrLicense_plate",
"trackingDetails.lifecycleItemDesc.attribute.faceOrLicense_plate",
{
ns: "views/explore",
label,
@@ -52,7 +53,7 @@
},
);
} else {
title = t("objectLifecycle.lifecycleItemDesc.attribute.other", {
title = t("trackingDetails.lifecycleItemDesc.attribute.other", {
ns: "views/explore",
label: lifecycleItem.data.label,
attribute: getTranslatedLabel(
@@ -63,17 +64,17 @@
return title;
}
case "gone":
return t("objectLifecycle.lifecycleItemDesc.gone", {
return t("trackingDetails.lifecycleItemDesc.gone", {
ns: "views/explore",
label,
});
case "heard":
return t("objectLifecycle.lifecycleItemDesc.heard", {
return t("trackingDetails.lifecycleItemDesc.heard", {
ns: "views/explore",
label,
});
case "external":
return t("objectLifecycle.lifecycleItemDesc.external", {
return t("trackingDetails.lifecycleItemDesc.external", {
ns: "views/explore",
label,
});
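The sub_label handling added at the top of `getLifecycleItemDescription` can be sketched in isolation. This is a simplification: the translation path (`getTranslatedLabel`) is stubbed out, and `capitalizeFirstLetter` is reimplemented locally rather than imported:

```typescript
// Stand-in for the imported helper from stringUtil.
function capitalizeFirstLetter(s: string): string {
  return s.charAt(0).toUpperCase() + s.slice(1);
}

// Sub-labels (user-assigned names, recognized plates) are shown
// capitalized as-is; plain detection labels would normally go through
// getTranslatedLabel instead, which is omitted here.
function displayLabel(label: string, subLabel?: string | string[]): string {
  const raw = Array.isArray(subLabel) ? subLabel[0] : subLabel || label;
  return subLabel ? capitalizeFirstLetter(raw) : raw;
}
```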


@@ -11,7 +11,6 @@ import {
FrigateConfig,
} from "@/types/frigateConfig";
import { useEffect, useMemo, useState } from "react";
import { isMobile } from "react-device-detect";
import { useTranslation } from "react-i18next";
import { FaFolderPlus } from "react-icons/fa";
import { MdModelTraining } from "react-icons/md";
@@ -131,7 +130,7 @@ export default function ModelSelectionView({
</Button>
</div>
</div>
<div className="flex size-full gap-2 p-2">
<div className="grid auto-rows-max grid-cols-2 gap-2 overflow-y-auto p-2 md:grid-cols-4 lg:grid-cols-5 xl:grid-cols-6 2xl:grid-cols-8 3xl:grid-cols-10">
{selectedClassificationConfigs.length === 0 ? (
<NoModelsView
onCreateModel={() => setNewModel(true)}
@@ -208,14 +207,13 @@ function ModelCard({ config, onClick }: ModelCardProps) {
<div
key={config.name}
className={cn(
"relative size-60 cursor-pointer overflow-hidden rounded-lg",
"relative aspect-square w-full cursor-pointer overflow-hidden rounded-lg",
"outline-transparent duration-500",
isMobile && "w-full",
)}
onClick={() => onClick()}
>
<img
className={cn("size-full", isMobile && "w-full")}
className="size-full"
src={`${baseUrl}clips/${config.name}/dataset/${coverImage?.name}/${coverImage?.img}`}
/>
<ImageShadowOverlay />


@@ -202,6 +202,11 @@ export default function EventView({
t("export.toast.success", { ns: "components/dialog" }),
{
position: "top-center",
action: (
<a href="/export" target="_blank" rel="noopener noreferrer">
<Button>View</Button>
</a>
),
},
);
}


@@ -232,8 +232,8 @@ function ExploreThumbnailImage({
}
};
const handleShowObjectLifecycle = () => {
onSelectSearch(event, false, "object_lifecycle");
const handleShowTrackingDetails = () => {
onSelectSearch(event, false, "tracking_details");
};
const handleShowSnapshot = () => {
@@ -251,7 +251,7 @@
searchResult={event}
findSimilar={handleFindSimilar}
refreshResults={mutate}
showObjectLifecycle={handleShowObjectLifecycle}
showTrackingDetails={handleShowTrackingDetails}
showSnapshot={handleShowSnapshot}
addTrigger={handleAddTrigger}
isContextMenu={true}


@@ -11,6 +11,7 @@ import DetailStream from "@/components/timeline/DetailStream";
import { Button } from "@/components/ui/button";
import { ToggleGroup, ToggleGroupItem } from "@/components/ui/toggle-group";
import { useOverlayState } from "@/hooks/use-overlay-state";
import { useResizeObserver } from "@/hooks/resize-observer";
import { ExportMode } from "@/types/filter";
import { FrigateConfig } from "@/types/frigateConfig";
import { Preview } from "@/types/preview";
@@ -31,12 +32,7 @@ import {
useRef,
useState,
} from "react";
import {
isDesktop,
isMobile,
isMobileOnly,
isTablet,
} from "react-device-detect";
import { isDesktop, isMobile } from "react-device-detect";
import { IoMdArrowRoundBack } from "react-icons/io";
import { useNavigate } from "react-router-dom";
import { Toaster } from "@/components/ui/sonner";
@@ -55,7 +51,6 @@ import {
RecordingSegment,
RecordingStartingPoint,
} from "@/types/record";
import { useResizeObserver } from "@/hooks/resize-observer";
import { cn } from "@/lib/utils";
import { useFullscreen } from "@/hooks/use-fullscreen";
import { useTimezone } from "@/hooks/use-date-utils";
@@ -399,49 +394,47 @@ export function RecordingView({
}
}, [mainCameraAspect]);
const [{ width: mainWidth, height: mainHeight }] =
// use a resize observer to determine whether to use w-full or h-full based on container aspect ratio
const [{ width: containerWidth, height: containerHeight }] =
useResizeObserver(cameraLayoutRef);
const [{ width: previewRowWidth, height: previewRowHeight }] =
useResizeObserver(previewRowRef);
const mainCameraStyle = useMemo(() => {
if (isMobile || mainCameraAspect != "normal" || !config) {
return undefined;
const useHeightBased = useMemo(() => {
if (!containerWidth || !containerHeight) {
return false;
}
const camera = config.cameras[mainCamera];
if (!camera) {
return undefined;
const cameraAspectRatio = getCameraAspect(mainCamera);
if (!cameraAspectRatio) {
return false;
}
const aspect = getCameraAspect(mainCamera);
// Calculate available space for camera after accounting for preview row
// For tall cameras: preview row is side-by-side (takes width)
// For wide/normal cameras: preview row is stacked (takes height)
const availableWidth =
mainCameraAspect == "tall" && previewRowWidth
? containerWidth - previewRowWidth
: containerWidth;
const availableHeight =
mainCameraAspect != "tall" && previewRowHeight
? containerHeight - previewRowHeight
: containerHeight;
if (!aspect) {
return undefined;
}
const availableAspectRatio = availableWidth / availableHeight;
const availableHeight = mainHeight - 112;
let percent;
if (mainWidth / availableHeight < aspect) {
percent = 100;
} else {
const availableWidth = aspect * availableHeight;
percent =
(mainWidth < availableWidth
? mainWidth / availableWidth
: availableWidth / mainWidth) * 100;
}
return {
width: `${Math.round(percent)}%`,
};
// If available space is wider than camera aspect, constrain by height (h-full)
// If available space is taller than camera aspect, constrain by width (w-full)
return availableAspectRatio >= cameraAspectRatio;
}, [
config,
mainCameraAspect,
mainWidth,
mainHeight,
mainCamera,
containerWidth,
containerHeight,
previewRowWidth,
previewRowHeight,
getCameraAspect,
mainCamera,
mainCameraAspect,
]);
const previewRowOverflows = useMemo(() => {
@ -685,19 +678,17 @@ export function RecordingView({
<div
ref={mainLayoutRef}
className={cn(
"flex h-full justify-center overflow-hidden",
isDesktop ? "" : "flex-col gap-2 landscape:flex-row",
"flex flex-1 overflow-hidden",
isDesktop ? "flex-row" : "flex-col gap-2 landscape:flex-row",
)}
>
<div
ref={cameraLayoutRef}
className={cn(
"flex flex-1 flex-wrap",
"flex flex-1 flex-wrap overflow-hidden",
isDesktop
? timelineType === "detail"
? "md:w-[40%] lg:w-[70%] xl:w-full"
: "w-[80%]"
: "",
? "min-w-0 px-4"
: "portrait:max-h-[50dvh] portrait:flex-shrink-0 portrait:flex-grow-0 portrait:basis-auto",
)}
>
<div
@@ -711,37 +702,24 @@
<div
key={mainCamera}
className={cn(
"relative",
"relative flex max-h-full min-h-0 min-w-0 max-w-full items-center justify-center",
isDesktop
? cn(
"flex justify-center px-4",
mainCameraAspect == "tall"
? "h-[50%] md:h-[60%] lg:h-[75%] xl:h-[90%]"
: mainCameraAspect == "wide"
? "w-full"
: "",
)
? // Desktop: dynamically switch between w-full and h-full based on
// container vs camera aspect ratio to ensure proper fitting
useHeightBased
? "h-full"
: "w-full"
: cn(
"pt-2 portrait:w-full",
isMobileOnly &&
(mainCameraAspect == "wide"
? "aspect-wide landscape:w-full"
: "aspect-video landscape:h-[94%] landscape:xl:h-[65%]"),
isTablet &&
(mainCameraAspect == "wide"
? "aspect-wide landscape:w-full"
: mainCameraAspect == "normal"
? "landscape:w-full"
: "aspect-video landscape:h-[100%]"),
"flex-shrink-0 portrait:w-full landscape:h-full",
mainCameraAspect == "wide"
? "aspect-wide"
: mainCameraAspect == "tall"
? "aspect-tall portrait:h-full"
: "aspect-video",
),
)}
style={{
width: mainCameraStyle ? mainCameraStyle.width : undefined,
aspectRatio: isDesktop
? mainCameraAspect == "tall"
? getCameraAspect(mainCamera)
: undefined
: Math.max(1, getCameraAspect(mainCamera) ?? 0),
aspectRatio: getCameraAspect(mainCamera),
}}
>
{isDesktop && (
@@ -782,10 +760,10 @@
<div
ref={previewRowRef}
className={cn(
"scrollbar-container flex gap-2 overflow-auto",
"scrollbar-container flex flex-shrink-0 gap-2 overflow-auto",
mainCameraAspect == "tall"
? "h-full w-72 flex-col"
: `h-28 w-full`,
? "ml-2 h-full w-72 min-w-72 flex-col"
: "h-28 min-h-28 w-full",
previewRowOverflows ? "" : "items-center justify-center",
timelineType == "detail" && isDesktop && "mt-4",
)}
@@ -971,10 +949,23 @@ function Timeline({
return (
<div
className={cn(
"relative",
"relative overflow-hidden",
isDesktop
? `${timelineType == "timeline" ? "w-[100px]" : timelineType == "detail" ? "w-[30%] min-w-[350px]" : "w-60"} no-scrollbar overflow-y-auto`
: `overflow-hidden portrait:flex-grow ${timelineType == "timeline" ? "landscape:w-[100px]" : timelineType == "detail" && isDesktop ? "flex-1" : "landscape:w-[300px]"} `,
? cn(
"no-scrollbar overflow-y-auto",
timelineType == "timeline"
? "w-[100px] flex-shrink-0"
: timelineType == "detail"
? "min-w-[20rem] max-w-[30%] flex-shrink-0 flex-grow-0 basis-[30rem] md:min-w-[20rem] md:max-w-[25%] lg:min-w-[30rem] lg:max-w-[33%]"
: "w-60 flex-shrink-0",
)
: cn(
timelineType == "timeline"
? "portrait:flex-grow landscape:w-[100px] landscape:flex-shrink-0"
: timelineType == "detail"
? "portrait:flex-grow landscape:w-[19rem] landscape:flex-shrink-0"
: "portrait:flex-grow landscape:w-[19rem] landscape:flex-shrink-0",
),
)}
>
{isMobile && (
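The `useHeightBased` memo earlier in this file decides whether the main camera should be constrained by width (`w-full`) or height (`h-full`): subtract the preview row from the container, then compare the remaining aspect ratio against the camera's. A standalone sketch of that decision (the function name and the single `previewRowSize` parameter are illustrative simplifications):

```typescript
// Returns true when the space left over after the preview row is wider
// than the camera, i.e. the video should fill the height (h-full);
// false means it should fill the width (w-full).
function heightBasedFit(
  containerWidth: number,
  containerHeight: number,
  cameraAspectRatio: number, // width / height of the camera stream
  previewRowSize: number, // row width for tall cameras, height otherwise
  cameraIsTall: boolean, // tall cameras place the preview row beside them
): boolean {
  if (!containerWidth || !containerHeight || !cameraAspectRatio) {
    return false;
  }
  const availableWidth = cameraIsTall
    ? containerWidth - previewRowSize
    : containerWidth;
  const availableHeight = cameraIsTall
    ? containerHeight
    : containerHeight - previewRowSize;
  return availableWidth / availableHeight >= cameraAspectRatio;
}
```

For a 16:9 stream in a 2000×900 container with a stacked 112px preview row, the available area (2000×788) is wider than the camera, so the fit is height-based.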


@@ -644,8 +644,8 @@ export default function SearchView({
}
}}
refreshResults={refresh}
showObjectLifecycle={() =>
onSelectSearch(value, false, "object_lifecycle")
showTrackingDetails={() =>
onSelectSearch(value, false, "tracking_details")
}
showSnapshot={() =>
onSelectSearch(value, false, "snapshot")