Complete F-SBID custom application signature reference for identifying local and network-hosted LLM infrastructure, client tooling, base model families, and fine-tune organizations. Covers OpenAI-compatible and Anthropic-compatible API formats.
39 Total Sigs
11 Infra & Client
19 Base Models
9 Fine-Tuners
2 API Formats
§1
Complete Signature Index
39 signatures · 4 categories
#
Signature Name
Category
Match Field
Pattern
Weight
Purpose
01
LM.Studio.Native.API
Infra
URI
/api/v0/
50
LM Studio native REST API v0 — unique, no cloud overlap
01b
LM.Studio.Native.APIv1
Infra
URI+Header
/api/v1/ + node
50
LM Studio native REST API v1 — anchored by node User-Agent
02
AnythingLLM.API
Infra
URI
/api/v1/workspace
50
AnythingLLM server endpoint
03
Local.LLM.OpenAI.Compat
Infra
URI
/v1/chat/completions
30
Any OpenAI-format local LLM server
04
AnythingLLM.OpenAI.SDK
Client
Header
OpenAI/JS
55
AnythingLLM via OpenAI JS SDK (chat POST flows)
04b
AnythingLLM.NativeAPI
Client
URI+Header×2
/api/v + node + sec-fetch-mode
53
AnythingLLM model enumeration via LM Studio native API
05
Local.LLM.Anthropic.Compat
Infra
URI
/v1/messages
25
Any Anthropic-format local LLM server
06
LM.Studio.Anthropic.API
Infra
URI+Header
/v1/messages + claude-cli
55
LM Studio Anthropic endpoint (Claude Code path)
07
Client.ClaudeCode
Client
Header
claude-cli
55
Claude Code CLI client
08
Client.OpenAI.Python.SDK
Client
Header
AsyncOpenAI/Python
50
Python openai SDK — scripted access
09
Client.CherryStudio
Client
Header
CherryStudio
55
Cherry Studio Electron client — confirmed v1.9.4
10
Model.Llama
Base Model
Body
llama-
60
Meta Llama all variants
11
Model.Mistral
Base Model
Body
mistral
60
Mistral AI family inc. Mixtral, Devstral
12
Model.Phi
Base Model
Body
phi-
60
Microsoft Phi-3, Phi-4
13
Model.Gemma
Base Model
Body
gemma-
60
Google Gemma family
14
Model.Qwen
Base Model
Body
qwen
60
Alibaba Qwen family
15
Model.DeepSeek
Base Model
Body
deepseek
65
DeepSeek AI — elevated weight for distilled models
16
Model.Nemotron
Base Model
Body
nemotron
60
NVIDIA Nemotron
17
Model.LFM
Base Model
Body
lfm
60
Liquid AI LFM series
18
Model.GLM
Base Model
Body
glm
60
Z.ai GLM series
19
Model.Granite
Base Model
Body
granite-
60
IBM Granite
20
Model.GPT-OSS
Base Model
Body
gpt-oss
60
OpenAI open source models
21
Model.OLMo
Base Model
Body
olmo
60
Allen AI OLMo / olmOCR
22
Model.Ernie
Base Model
Body
ernie-
60
Baidu Ernie
23
Model.MiniMax
Base Model
Body
minimax
60
MiniMax M2
24
Model.Falcon
Base Model
Body
falcon-
60
TII Falcon
25
Model.Command
Base Model
Body
command-r
60
Cohere Command-R, Command-R+, Command-R7B
26
Model.InternLM
Base Model
Body
internlm
60
Shanghai AI Lab InternLM
27
Model.Solar
Base Model
Body
solar-
60
Upstage Solar
28
Model.Kimi
Base Model
Body
kimi
60
Moonshot AI Kimi — K2, K2.5, K2.6
29
Model.Hermes
Fine-Tune
Body
hermes
62
NousResearch Hermes 2/3/4, OpenHermes
30
Model.Dolphin
Fine-Tune ⚠ Uncensored
Body
dolphin
68
Eric Hartford Dolphin — all safety refusals removed
31
Model.Zephyr
Fine-Tune
Body
zephyr
62
HuggingFace H4 Zephyr
32
Model.OpenChat
Fine-Tune
Body
openchat
62
OpenChat Project 3.5/3.6
33
Model.Wizard
Fine-Tune
Body
wizard
62
Microsoft Research WizardLM / WizardCoder / WizardMath
34
Model.Vicuna
Fine-Tune
Body
vicuna
62
UC Berkeley LMSYS Vicuna
35
Model.Orca
Fine-Tune
Body
orca
62
Microsoft Orca-2 / OpenOrca community
36
Model.Airoboros
Fine-Tune
Body
airoboros
62
jondurbin Airoboros creative/roleplay
37
Model.Phind
Fine-Tune
Body
phind
62
Phind code-specialized CodeLlama
Model.Phind
Fine-Tune
Body
phind
62
Phind code-specialized CodeLlama
§2
Weight Hierarchy
Priority (highest wins):
68 Dolphin — uncensored
65 DeepSeek — distilled priority
62 Fine-tune orgs
60 Base model families
55 Client signatures
53 AnythingLLM.NativeAPI
50 App/server signatures
30 OpenAI catch-all
25 Anthropic catch-all
ℹ
How weights interact: When multiple signatures match the same session, FortiOS selects the highest weight. Fine-tuner signatures (62) outrank base model signatures (60) — nous-hermes-2-llama-3.1-8b correctly identifies as Model.Hermes, not Model.Llama. Client signatures (weight 55) yield to base model signatures (weight 60) on the same flow — when a model is identified, the model wins. Client signatures appear in logs for flows where no model signature fires (e.g., an unrecognized model name). AnythingLLM.NativeAPI (weight 53) sits between client and infrastructure sigs — it wins over LM.Studio.Native.APIv1 (50) on model enumeration flows where AnythingLLM's Node.js fetch is the client, while LM Studio's own http-module polling (no sec-fetch-mode) still correctly fires at weight 50. Infrastructure signatures (URI-based, weight 50 or lower) operate on distinct request types and do not compete with model signatures.
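The highest-weight-wins rule above can be sketched in a few lines of Python. This is an illustration of the selection logic only, not FortiOS code; the signature names and weights are taken from this library's hierarchy.

```python
# Sketch of FortiOS signature selection: when several custom signatures
# match the same session, the one with the highest weight is reported.
def winning_signature(matches):
    """Return the (name, weight) pair with the highest weight."""
    return max(matches, key=lambda m: m[1])

# A request for "nous-hermes-2-llama-3.1-8b" matches the Llama base
# signature (60), the Hermes fine-tune signature (62), and the
# OpenAI-compat catch-all (30). The fine-tune signature wins.
matches = [
    ("Model.Llama", 60),
    ("Model.Hermes", 62),
    ("Local.LLM.OpenAI.Compat", 30),
]
print(winning_signature(matches))  # ('Model.Hermes', 62)
```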
§3
Important Notes
⚠
SECURITY — Dolphin (weight 68): The Dolphin series explicitly removes all AI content safety refusals. Users running Dolphin locally bypass AI content guardrails entirely. Recommended as the first signature promoted from Monitor to Block.
⚡
Body Matching Limitation: The IPS engine does not reassemble HTTP body data across TCP packets. The "model" field appears at the start of every OpenAI-compatible POST body and is reliably in the first packet for normal requests. Verify with packet capture if matching issues arise.
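For intuition on why first-packet matching is usually sufficient, here is an illustrative OpenAI-compatible request body (the model name is just an example). Clients serialize "model" as the first key, so a body-context pattern sits in the first ~100 bytes of the POST body:

```python
import json

# Illustrative chat request as sent by OpenAI-compatible clients.
payload = {
    "model": "deepseek-r1-distill-llama-8b",
    "messages": [{"role": "user", "content": "hello"}],
    "stream": True,
}
body = json.dumps(payload).encode()

# The body-context pattern a signature scans for is well inside
# the first TCP packet of any normally sized request.
assert body.startswith(b'{"model"')
assert b"deepseek" in body[:100]
```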
⚡
Claude Code uses Anthropic format, not OpenAI format: Claude Code sends requests to /v1/messages, not /v1/chat/completions. The per-model body signatures anchor on /v1/chat/completions and therefore never fire for Claude Code traffic. Use Client.ClaudeCode (#07) and LM.Studio.Anthropic.API (#06) to detect Claude Code traffic instead.
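A minimal sketch of the two request shapes, following the public OpenAI and Anthropic API formats (the model name and field values are illustrative):

```python
# OpenAI Chat Completions format - matched by the per-model signatures.
openai_style = {
    "path": "/v1/chat/completions",
    "body": {
        "model": "qwen2.5-7b-instruct",
        "messages": [{"role": "user", "content": "hi"}],
    },
}

# Anthropic Messages format - used by Claude Code; only the /v1/messages
# signatures (catch-all and LM.Studio.Anthropic.API) see this traffic.
anthropic_style = {
    "path": "/v1/messages",
    "body": {
        "model": "qwen2.5-7b-instruct",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "hi"}],
    },
}

assert openai_style["path"] != anthropic_style["path"]
```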
✓
Deep Packet Inspection required for HTTPS: Signatures using --service HTTP cover both HTTP and HTTPS, but encrypted traffic is only visible to the IPS engine after decryption; an SSL deep inspection profile must be applied on the firewall policy for these signatures to fire on HTTPS flows.
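As a sketch, a firewall policy combining deep inspection with an application control profile might look like the following. The policy ID, interface names, address object, and profile names are placeholders for your environment; "deep-inspection" is the FortiOS default deep inspection profile name.

```
config firewall policy
    edit 42
        set name "clients-to-llm-servers"
        set srcintf "internal"
        set dstintf "llm-vlan"
        set srcaddr "all"
        set dstaddr "llm-servers"
        set action accept
        set schedule "always"
        set service "ALL"
        set utm-status enable
        set ssl-ssh-profile "deep-inspection"
        set application-list "llm-app-control"
        set logtraffic all
    next
end
```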
ℹ
FortiOS 7.6.4 minimum required for Category 36 (GenAI): Category 36 was introduced in FortiOS 7.6.4. These signatures have been tested and confirmed on 7.6.6. If you are running 7.6.3 or earlier, Category 36 will not be available — substitute the most appropriate category available in your version. The F-SBID signature syntax itself works on earlier FortiOS versions.
ℹ
FortiGuard native signatures: FortiGuard already covers Ollama, Claude (cloud), ChatGPT, and other cloud AI services. These signatures cover the same API formats on local/internal network traffic. They complement — not replace — FortiGuard signatures.
ℹ
Application and Filter Overrides — per-application only: After saving signatures, add them to your Application Control profile under Security Profiles → Application Control → Application and Filter Overrides. Overrides operate at the per-application level — multi-select individual signatures and assign an action (Monitor or Block). The GenAI category (36) is already present in the global Application Control settings and does not need to be added as a separate override entry.
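The CLI equivalent of a per-application override is an entry in the application list. This is a sketch: the profile name, entry ID, and application IDs are placeholders (signature IDs are assigned when the custom signatures are saved; look them up with show application custom). The GUI's Monitor action corresponds to pass with logging in the CLI.

```
config application list
    edit "llm-app-control"
        config entries
            edit 1
                # Placeholder IDs - substitute the IDs assigned to
                # your custom signatures.
                set application 10001 10002 10003
                set action pass
                set log enable
            next
        end
    next
end
```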
⚡
Layer 2 traffic — micro-segmentation required: These signatures only inspect traffic that crosses the FortiGate at layer 3. If the inference server and the client are on the same subnet, their traffic will not pass through the FortiGate and these signatures will not fire. Micro-segmentation — placing inference servers on a dedicated VLAN that routes through the FortiGate — is required to enforce inspection on same-subnet LLM traffic.
§4
CLI — Infrastructure & Client Signatures
Signatures 01–09
OpenAI-Compatible API Format (01–04b)
# Paste into FortiOS CLI
config application custom
    edit "LM.Studio.Native.API"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "LM Studio native REST API v0 path - /api/v0/ is unique to LM Studio, confirmed no overlap with cloud AI services"
        set signature "F-SBID( --name \"LM.Studio.Native.API\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/api/v0/\"; --no_case; --context uri; --weight 50;)"
    next
    edit "LM.Studio.Native.APIv1"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "LM Studio native REST API v1 path - node User-Agent anchor distinguishes from cloud services after a false positive confirmed on chat.qwen.ai"
        set signature "F-SBID( --name \"LM.Studio.Native.APIv1\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/api/v1/\"; --no_case; --context uri; --pattern \"node\"; --no_case; --context header; --weight 50;)"
    next
    edit "AnythingLLM.API"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "AnythingLLM server - /api/v1/workspace path unique to AnythingLLM"
        set signature "F-SBID( --name \"AnythingLLM.API\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/api/v1/workspace\"; --no_case; --context uri; --weight 50;)"
    next
    edit "Local.LLM.OpenAI.Compat"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "Catch-all for OpenAI-compatible /v1/chat/completions - covers LM Studio, Ollama, llama.cpp, vLLM, LocalAI, Jan and future servers"
        set signature "F-SBID( --name \"Local.LLM.OpenAI.Compat\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --no_case; --context uri; --weight 30;)"
    next
    edit "AnythingLLM.OpenAI.SDK"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "AnythingLLM client - identified by OpenAI/JS User-Agent sent by the OpenAI JavaScript SDK"
        set signature "F-SBID( --name \"AnythingLLM.OpenAI.SDK\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"OpenAI/JS\"; --no_case; --context header; --weight 55;)"
    next
    edit "AnythingLLM.NativeAPI"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "AnythingLLM native API - node User-Agent plus sec-fetch-mode header distinguishes AnythingLLM's Node.js fetch from LM Studio's http-module polling"
        set signature "F-SBID( --name \"AnythingLLM.NativeAPI\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/api/v\"; --no_case; --context uri; --pattern \"node\"; --no_case; --context header; --pattern \"sec-fetch-mode\"; --no_case; --context header; --weight 53;)"
    next
end
Anthropic-Compatible API Format + Client Identification (05–09)
config application custom
    edit "Local.LLM.Anthropic.Compat"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "Catch-all for Anthropic Messages API format on local/internal traffic - covers current and future servers implementing /v1/messages"
        set signature "F-SBID( --name \"Local.LLM.Anthropic.Compat\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/messages\"; --context uri; --weight 25;)"
    next
    edit "LM.Studio.Anthropic.API"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "LM Studio Anthropic-compatible endpoint accessed by Claude Code - introduced in LM Studio 0.4.1"
        set signature "F-SBID( --name \"LM.Studio.Anthropic.API\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/messages\"; --context uri; --pattern \"claude-cli\"; --no_case; --context header; --weight 55;)"
    next
    edit "Client.ClaudeCode"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "Claude Code CLI client - User-Agent claude-cli/x.x.x (external, cli) - fires whether connecting to Anthropic cloud or local server"
        set signature "F-SBID( --name \"Client.ClaudeCode\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"claude-cli\"; --no_case; --context header; --weight 55;)"
    next
    edit "Client.OpenAI.Python.SDK"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "Python OpenAI SDK client - AsyncOpenAI/Python user-agent indicates scripted or programmatic LLM API access"
        set signature "F-SBID( --name \"Client.OpenAI.Python.SDK\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"AsyncOpenAI/Python\"; --no_case; --context header; --weight 50;)"
    next
    edit "Client.CherryStudio"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "Cherry Studio Electron client - CherryStudio UA"
        set signature "F-SBID( --name \"Client.CherryStudio\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"CherryStudio\"; --no_case; --context header; --weight 55;)"
    next
end
§5
CLI — Base Model Families
Signatures 10–28 · weight 60 (DeepSeek: 65)
# Match on /v1/chat/completions URI + model name in POST body
config application custom
    edit "Model.Llama"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "Meta Llama family - llama- avoids body text FP"
        set signature "F-SBID( --name \"Model.Llama\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"llama-\"; --no_case; --context body; --weight 60;)"
    next
    edit "Model.Mistral"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "Mistral AI model family - covers Mistral, Mixtral, Ministral, Magistral, Devstral"
        set signature "F-SBID( --name \"Model.Mistral\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"mistral\"; --no_case; --context body; --weight 60;)"
    next
    edit "Model.Phi"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "Microsoft Phi family - phi- avoids body text FP"
        set signature "F-SBID( --name \"Model.Phi\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"phi-\"; --no_case; --context body; --weight 60;)"
    next
    edit "Model.Gemma"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "Google Gemma family - gemma- avoids body text FP"
        set signature "F-SBID( --name \"Model.Gemma\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"gemma-\"; --no_case; --context body; --weight 60;)"
    next
    edit "Model.Qwen"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "Alibaba Qwen model family"
        set signature "F-SBID( --name \"Model.Qwen\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"qwen\"; --no_case; --context body; --weight 60;)"
    next
    edit "Model.DeepSeek"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "DeepSeek AI model family - weight 65 wins over Llama for deepseek-r1-distill-llama variants"
        set signature "F-SBID( --name \"Model.DeepSeek\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"deepseek\"; --no_case; --context body; --weight 65;)"
    next
    edit "Model.Nemotron"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "NVIDIA Nemotron model family"
        set signature "F-SBID( --name \"Model.Nemotron\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"nemotron\"; --no_case; --context body; --weight 60;)"
    next
    edit "Model.LFM"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "Liquid AI LFM model family"
        set signature "F-SBID( --name \"Model.LFM\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"lfm\"; --no_case; --context body; --weight 60;)"
    next
    edit "Model.GLM"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "Z.ai GLM model family"
        set signature "F-SBID( --name \"Model.GLM\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"glm\"; --no_case; --context body; --weight 60;)"
    next
    edit "Model.Granite"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "IBM Granite family - granite- avoids body text FP"
        set signature "F-SBID( --name \"Model.Granite\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"granite-\"; --no_case; --context body; --weight 60;)"
    next
    edit "Model.GPT-OSS"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "OpenAI gpt-oss open source model family"
        set signature "F-SBID( --name \"Model.GPT-OSS\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"gpt-oss\"; --no_case; --context body; --weight 60;)"
    next
    edit "Model.OLMo"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "Allen AI OLMo model family"
        set signature "F-SBID( --name \"Model.OLMo\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"olmo\"; --no_case; --context body; --weight 60;)"
    next
    edit "Model.Ernie"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "Baidu Ernie family - ernie- avoids body text FP"
        set signature "F-SBID( --name \"Model.Ernie\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"ernie-\"; --no_case; --context body; --weight 60;)"
    next
    edit "Model.MiniMax"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "MiniMax model family"
        set signature "F-SBID( --name \"Model.MiniMax\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"minimax\"; --no_case; --context body; --weight 60;)"
    next
    edit "Model.Falcon"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "TII Falcon family - falcon- avoids body text FP"
        set signature "F-SBID( --name \"Model.Falcon\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"falcon-\"; --no_case; --context body; --weight 60;)"
    next
    edit "Model.Command"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "Cohere Command-R family - command-r pattern avoids body text FP"
        set signature "F-SBID( --name \"Model.Command\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"command-r\"; --no_case; --context body; --weight 60;)"
    next
    edit "Model.InternLM"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "Shanghai AI Lab InternLM model family"
        set signature "F-SBID( --name \"Model.InternLM\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"internlm\"; --no_case; --context body; --weight 60;)"
    next
    edit "Model.Solar"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "Upstage Solar family - solar- avoids body text FP"
        set signature "F-SBID( --name \"Model.Solar\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"solar-\"; --no_case; --context body; --weight 60;)"
    next
    edit "Model.Kimi"
        set category 36
        set technology 2
        set behavior 9
        set vendor 0
        set protocol "TCP"
        set comment "Moonshot AI Kimi model family - K2, K2.5, K2.6"
        set signature "F-SBID( --name \"Model.Kimi\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"kimi\"; --no_case; --context body; --weight 60;)"
    next
end
§9
FortiGuard Integration Notes
✓
These signatures complement FortiGuard — they do not replace it. The distinction is not simply cloud versus local. FortiGuard signatures may exist for some of these applications but may not match against the specific local communication mechanisms documented here — the URI paths, User-Agent headers, and POST body patterns used by local inference servers and their clients. FortiGuard signatures may also be updated in the future to detect some of these patterns, or local applications may evolve in ways that cause FortiGuard signatures to trigger where they previously did not. This library's value is in the specific identification mechanisms discovered through direct traffic analysis of local LLM infrastructure. Always verify current FortiGuard coverage in your own environment.
Application
FortiGuard Coverage
This Library Adds
Claude / Anthropic
Signatures exist targeting Anthropic API patterns
Local /v1/messages servers and Claude Code client UA not matched by FortiGuard cloud signatures
Ollama
Signatures exist for Ollama
Local Ollama API /v1/chat/completions serving — FortiGuard may not match all local serving patterns
ChatGPT / OpenAI
Signatures exist targeting OpenAI API patterns
Local OpenAI-compatible servers and Python SDK client UA
LM Studio
No FortiGuard signature (local-only app at time of testing)
Full coverage — native API, Anthropic API, model identity
AnythingLLM
No FortiGuard signature (self-hosted at time of testing)
Full coverage — server endpoint and client UA
Model families
Not applicable — FortiGuard does not identify model-level traffic
All 28 model signatures for local deployment
Verification Commands
# Verify all custom signatures saved correctly
show application custom
# Check a specific signature
show application custom "Model.Dolphin"
# Verify AIAP GenAI database is current
diagnose autoupdate versions | grep -A 6 GenAI
# View application control log entries
execute log display
§10
Network Architecture Considerations
Layer 2 Traffic and Micro-Segmentation
⚡
Same-subnet traffic bypasses the FortiGate entirely. These signatures only inspect traffic that crosses the FortiGate at layer 3. If the LM Studio server (or any other inference server) and the client making requests sit on the same subnet, their traffic never crosses a layer 3 boundary and the FortiGate will not see it — the signatures will not fire regardless of how the Application Control profile is configured.
To enforce application control on same-subnet LLM traffic, micro-segmentation is required. This means placing inference servers on a dedicated VLAN or subnet that routes through the FortiGate, so that all client traffic — regardless of client location — must cross a layer 3 boundary to reach the inference server.
Scenario
Traffic visible to FortiGate?
Signatures fire?
Client and server on different subnets (routed through FortiGate)
✓ Yes
✓ Yes
Client and server on same subnet, inference server on dedicated VLAN (micro-segmented)
✓ Yes
✓ Yes
Client and server on same flat subnet, no micro-segmentation
— No
— No
Localhost only (LM Studio and client on same machine)
— No
— No (endpoint control required)
Localhost traffic (where both the inference server and the client run on the same machine) is never visible to the FortiGate and requires endpoint-level controls rather than network-level inspection.
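The micro-segmented scenario from the table above might be sketched as follows; the interface name, parent interface, VLAN ID, and addressing are placeholders for your environment. Clients then reach the inference servers on 10.10.100.0/24 only by routing through the FortiGate, where the firewall policy carrying the Application Control profile can inspect the traffic.

```
config system interface
    edit "llm-vlan"
        set vdom "root"
        set interface "internal"
        set vlanid 100
        set ip 10.10.100.1 255.255.255.0
        set allowaccess ping
        set role lan
    next
end
```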
§11
Versioning & Update Strategy
Version History
Version
Date
Summary
v1.3.0
2026-05-06
Added Model.Kimi — Moonshot AI Kimi model family (K2, K2.5, K2.6), pattern kimi, weight 60. Total: 39 signatures.
v1.2.0
2026-05-06
Added AnythingLLM.NativeAPI (sig 04b, weight 53) — triple-condition signature identifying AnythingLLM model enumeration via LM Studio native API. Differentiates Node.js 18+ fetch (sec-fetch-mode present) from LM Studio's http-module polling (sec-fetch-mode absent). Fixed Model.Solar false positive (production-confirmed: word "solar" in chat content triggered the sig — pattern tightened to solar-). Proactive hyphen anchoring applied to 6 base model signatures (llama-, phi-, gemma-, granite-, falcon-, ernie-) to reduce body-text false positive risk. Total: 38 signatures.
v1.1.0
2026-05-06
Added Client.CherryStudio — Cherry Studio Electron client confirmed in traffic analysis. Fixed Model.Command false positive (pattern command → command-r, confirmed in production logs). Fixed AnythingLLM.OpenAI.SDK weight 60 → 55 to restore correct weight hierarchy. Total: 37 signatures.
v1.0.1
2026-05-02
Fixed LM.Studio.Native.API false positive on cloud AI web frontends (chat.qwen.ai confirmed). Split into LM.Studio.Native.API (v0 path, unique) and LM.Studio.Native.APIv1 (v1 path + node User-Agent). Total: 36 signatures.
v1.0.0
2026-04-26
Initial release. 35 signatures covering infrastructure, base model families, and fine-tune organizations across OpenAI-compatible and Anthropic-compatible API formats.
When to Update This Library
The LLM ecosystem moves quickly. The following events should trigger a review and potential update of these signatures:
Trigger
Action
Priority
New model family released
Add base model signature (weight 60) if model name is distinct and likely to be deployed locally
Medium
New fine-tune organization emerges
Add fine-tuner signature (weight 62) if org has a consistent naming pattern across their catalog
Medium
False positive confirmed in logs
Tighten the pattern or add a secondary discriminator (URI + header combination). Document in CHANGELOG.
High
New local inference server released
Check URI paths and User-Agent strings; add infrastructure signature if unique patterns exist
Medium
New LLM client tool released
Check User-Agent header; add client identification signature if string is stable and distinctive
Medium
LM Studio version update
Verify /api/v0/ and /api/v1/ paths still apply; check for new API versions introduced
Periodic
FortiGuard signature update
Re-test cloud AI web frontends to verify no new false positives introduced by FortiGuard changes
Periodic
FortiOS upgrade
Re-verify all signatures save correctly and Category 36 remains available on the new version
On upgrade
Recommended Review Sources
ℹ
Monitor these sources to identify when new signatures may be needed: lmstudio.ai/models for new model additions to the LM Studio catalog, huggingface.co/models sorted by downloads for trending locally-deployable models, and the GitHub repository Issues for community-reported false positives and signature requests.
A major version bump (1.x.x → 2.0.0) is reserved for breaking changes: signatures removed, renamed, or weights significantly restructured.
When updating to a new version, always check the CHANGELOG for any signatures that have been removed or renamed before pasting new CLI blocks, as the FortiGate will error if you attempt to set a pattern on an existing signature without first deleting it.