§1

Complete Signature Index

| # | Signature Name | Category | Match Field | Pattern | Weight | Purpose |
|---|---|---|---|---|---|---|
| 01 | LM.Studio.Native.API | Infra | URI | /api/v0/ | 50 | LM Studio native REST API v0 — unique, no cloud overlap |
| 01b | LM.Studio.Native.APIv1 | Infra | URI+Header | /api/v1/ + node | 50 | LM Studio native REST API v1 — anchored by node User-Agent |
| 02 | AnythingLLM.API | Infra | URI | /api/v1/workspace | 50 | AnythingLLM server endpoint |
| 03 | Local.LLM.OpenAI.Compat | Infra | URI | /v1/chat/completions | 30 | Any OpenAI-format local LLM server |
| 04 | AnythingLLM.OpenAI.SDK | Client | Header | OpenAI/JS | 55 | AnythingLLM via OpenAI JS SDK (chat POST flows) |
| 04b | AnythingLLM.NativeAPI | Client | URI+Header×2 | /api/v + node + sec-fetch-mode | 53 | AnythingLLM model enumeration via LM Studio native API |
| 05 | Local.LLM.Anthropic.Compat | Infra | URI | /v1/messages | 25 | Any Anthropic-format local LLM server |
| 06 | LM.Studio.Anthropic.API | Infra | URI+Header | /v1/messages + claude-cli | 55 | LM Studio Anthropic endpoint (Claude Code path) |
| 07 | Client.ClaudeCode | Client | Header | claude-cli | 55 | Claude Code CLI client |
| 08 | Client.OpenAI.Python.SDK | Client | Header | AsyncOpenAI/Python | 50 | Python openai SDK — scripted access |
| 09 | Client.CherryStudio | Client | Header | CherryStudio | 55 | Cherry Studio Electron client — confirmed v1.9.4 |
| 10 | Model.Llama | Base Model | Body | llama- | 60 | Meta Llama all variants |
| 11 | Model.Mistral | Base Model | Body | mistral | 60 | Mistral AI family incl. Mixtral, Devstral |
| 12 | Model.Phi | Base Model | Body | phi- | 60 | Microsoft Phi-3, Phi-4 |
| 13 | Model.Gemma | Base Model | Body | gemma- | 60 | Google Gemma family |
| 14 | Model.Qwen | Base Model | Body | qwen | 60 | Alibaba Qwen family |
| 15 | Model.DeepSeek | Base Model | Body | deepseek | 65 | DeepSeek AI — elevated weight for distilled models |
| 16 | Model.Nemotron | Base Model | Body | nemotron | 60 | NVIDIA Nemotron |
| 17 | Model.LFM | Base Model | Body | lfm | 60 | Liquid AI LFM series |
| 18 | Model.GLM | Base Model | Body | glm | 60 | Z.ai GLM series |
| 19 | Model.Granite | Base Model | Body | granite- | 60 | IBM Granite |
| 20 | Model.GPT-OSS | Base Model | Body | gpt-oss | 60 | OpenAI open source models |
| 21 | Model.OLMo | Base Model | Body | olmo | 60 | Allen AI OLMo / olmOCR |
| 22 | Model.Ernie | Base Model | Body | ernie- | 60 | Baidu Ernie |
| 23 | Model.MiniMax | Base Model | Body | minimax | 60 | MiniMax M2 |
| 24 | Model.Falcon | Base Model | Body | falcon- | 60 | TII Falcon |
| 25 | Model.Command | Base Model | Body | command-r | 60 | Cohere Command-R, Command-R+, Command-R7B |
| 26 | Model.InternLM | Base Model | Body | internlm | 60 | Shanghai AI Lab InternLM |
| 27 | Model.Solar | Base Model | Body | solar- | 60 | Upstage Solar |
| 28 | Model.Kimi | Base Model | Body | kimi | 60 | Moonshot AI Kimi — K2, K2.5, K2.6 |
| 29 | Model.Hermes | Fine-Tune | Body | hermes | 62 | NousResearch Hermes 2/3/4, OpenHermes |
| 30 | Model.Dolphin | ⚠ Uncensored | Body | dolphin | 68 | Eric Hartford Dolphin — all safety refusals removed |
| 31 | Model.Zephyr | Fine-Tune | Body | zephyr | 62 | HuggingFace H4 Zephyr |
| 32 | Model.OpenChat | Fine-Tune | Body | openchat | 62 | OpenChat Project 3.5/3.6 |
| 33 | Model.Wizard | Fine-Tune | Body | wizard | 62 | Microsoft Research WizardLM / WizardCoder / WizardMath |
| 34 | Model.Vicuna | Fine-Tune | Body | vicuna | 62 | UC Berkeley LMSYS Vicuna |
| 35 | Model.Orca | Fine-Tune | Body | orca | 62 | Microsoft Orca-2 / OpenOrca community |
| 36 | Model.Airoboros | Fine-Tune | Body | airoboros | 62 | jondurbin Airoboros creative/roleplay |
| 37 | Model.Phind | Fine-Tune | Body | phind | 62 | Phind code-specialized CodeLlama |
§2

Weight Hierarchy

Priority (highest wins):
68 Dolphin — uncensored
65 DeepSeek — distilled priority
62 Fine-tune orgs
60 Base model families
55 Client signatures
53 AnythingLLM.NativeAPI
50 App/server signatures
30 OpenAI catch-all
25 Anthropic catch-all
How weights interact: When multiple signatures match the same session, FortiOS selects the highest weight. Fine-tuner signatures (62) outrank base model signatures (60), so nous-hermes-2-llama-3.1-8b correctly identifies as Model.Hermes, not Model.Llama.

Client signatures (weight 55) yield to base model signatures (weight 60) on the same flow: when a model is identified, the model wins. Client signatures therefore appear in logs only for flows where no model signature fires (e.g., an unrecognized model name).

AnythingLLM.NativeAPI (weight 53) sits between the client and infrastructure tiers. It wins over LM.Studio.Native.APIv1 (50) on model enumeration flows where AnythingLLM's Node.js fetch is the client, while LM Studio's own http-module polling (no sec-fetch-mode header) still correctly fires at weight 50. Infrastructure signatures (URI-based, weight 50 or lower) operate on distinct request types and do not compete with model signatures.
§3

Important Notes

SECURITY — Dolphin (weight 68): The Dolphin series explicitly removes all AI content safety refusals. Users running Dolphin locally bypass AI content guardrails entirely. Recommended as the first signature promoted from Monitor to Block.
Body Matching Limitation: The IPS engine does not reassemble HTTP body data across TCP packets. The "model" field appears at the start of every OpenAI-compatible POST body and is reliably in the first packet for normal requests. Verify with packet capture if matching issues arise.
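To see why first-packet matching normally works, consider where the "model" field lands in a serialized request. This sketch assumes the client serializes "model" as the first key, which is common for SDK-built requests but not guaranteed — hence the packet-capture advice above (the model name here is illustrative):

```python
import json

# Representative OpenAI-compatible chat request body.
payload = {
    "model": "llama-3.1-8b-instruct",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": True,
}
body = json.dumps(payload).encode()

# When "model" serializes first, the pattern sits at the very start of the
# body, comfortably inside the first TCP segment (~1460 payload bytes).
offset = body.find(b'"model"')
print(offset, len(body))
```

If a client serializes fields in a different order on a very large request (e.g., a long system prompt placed before "model"), the pattern could fall outside the first packet and the body match would miss.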
Claude Code uses Anthropic format, not OpenAI format: Claude Code sends requests to /v1/messages, not /v1/chat/completions. Per-model signatures (10–37) only match on /v1/chat/completions, so they will not fire on Claude Code traffic. Use Client.ClaudeCode (#07) and LM.Studio.Anthropic.API (#06) to detect Claude Code traffic instead.
Deep Packet Inspection required for HTTPS: Signatures using --service HTTP automatically cover both HTTP and HTTPS. HTTPS inspection requires an SSL deep inspection profile on the firewall policy.
FortiOS 7.6.4 minimum required for Category 36 (GenAI): Category 36 was introduced in FortiOS 7.6.4. These signatures have been tested and confirmed on 7.6.6. If you are running 7.6.3 or earlier, Category 36 will not be available — substitute the most appropriate category available in your version. The F-SBID signature syntax itself works on earlier FortiOS versions.
FortiGuard native signatures: FortiGuard already covers Ollama, Claude (cloud), ChatGPT, and other cloud AI services. These signatures cover the same API formats on local/internal network traffic. They complement — not replace — FortiGuard signatures.
Application and Filter Overrides — per-application only: After saving signatures, add them to your Application Control profile under Security Profiles → Application Control → Application and Filter Overrides. Overrides operate at the per-application level — multi-select individual signatures and assign an action (Monitor or Block). The GenAI category (36) is already present in the global Application Control settings and does not need to be added as a separate override entry.
Layer 2 traffic — micro-segmentation required: These signatures only inspect traffic that crosses the FortiGate at layer 3. If the inference server and the client are on the same subnet, their traffic will not pass through the FortiGate and these signatures will not fire. Micro-segmentation — placing inference servers on a dedicated VLAN that routes through the FortiGate — is required to enforce inspection on same-subnet LLM traffic.
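Before pasting the CLI blocks that follow, the running firmware version can be confirmed from the FortiOS CLI (standard command; the output line shown is representative, not from a specific device):

```
# Confirm the firmware version before relying on Category 36 (GenAI)
get system status
# Check the "Version:" line, e.g.  Version: FortiGate-100F v7.6.6,build...
```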
§4

CLI — Infrastructure & Client Signatures

OpenAI-Compatible API Format (01–04b)

# Paste into FortiOS CLI
config application custom

  edit "LM.Studio.Native.API"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "LM Studio native REST API v0 path - /api/v0/ is unique to LM Studio, confirmed no overlap with cloud AI services"
    set signature "F-SBID( --name \"LM.Studio.Native.API\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/api/v0/\"; --no_case; --context uri; --weight 50;)"
  next

  edit "LM.Studio.Native.APIv1"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "LM Studio native REST API v1 path - URI + node User-Agent distinguishes from cloud services (fixes false positive previously confirmed on chat.qwen.ai)"
    set signature "F-SBID( --name \"LM.Studio.Native.APIv1\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/api/v1/\"; --no_case; --context uri; --pattern \"node\"; --no_case; --context header; --weight 50;)"
  next

  edit "AnythingLLM.API"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "AnythingLLM server - /api/v1/workspace path unique to AnythingLLM"
    set signature "F-SBID( --name \"AnythingLLM.API\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/api/v1/workspace\"; --no_case; --context uri; --weight 50;)"
  next

  edit "Local.LLM.OpenAI.Compat"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Catch-all for OpenAI-compatible /v1/chat/completions - covers LM Studio, Ollama, llama.cpp, vLLM, LocalAI, Jan and future servers"
    set signature "F-SBID( --name \"Local.LLM.OpenAI.Compat\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --no_case; --context uri; --weight 30;)"
  next

  edit "AnythingLLM.OpenAI.SDK"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "AnythingLLM client - identified by OpenAI/JS User-Agent sent by the OpenAI JavaScript SDK"
    set signature "F-SBID( --name \"AnythingLLM.OpenAI.SDK\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"OpenAI/JS\"; --no_case; --context header; --weight 55;)"
  next

  edit "AnythingLLM.NativeAPI"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "AnythingLLM native API - node+cors vs LM Studio polling"
    set signature "F-SBID( --name \"AnythingLLM.NativeAPI\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/api/v\"; --no_case; --context uri; --pattern \"node\"; --no_case; --context header; --pattern \"sec-fetch-mode\"; --no_case; --context header; --weight 53;)"
  next

end

Anthropic-Compatible API Format + Client Identification (05–09)

config application custom

  edit "Local.LLM.Anthropic.Compat"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Catch-all for Anthropic Messages API format on local/internal traffic - covers current and future servers implementing /v1/messages"
    set signature "F-SBID( --name \"Local.LLM.Anthropic.Compat\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/messages\"; --context uri; --weight 25;)"
  next

  edit "LM.Studio.Anthropic.API"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "LM Studio Anthropic-compatible endpoint accessed by Claude Code - introduced in LM Studio 0.4.1"
    set signature "F-SBID( --name \"LM.Studio.Anthropic.API\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/messages\"; --context uri; --pattern \"claude-cli\"; --no_case; --context header; --weight 55;)"
  next

  edit "Client.ClaudeCode"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Claude Code CLI client - User-Agent claude-cli/x.x.x (external, cli) - fires whether connecting to Anthropic cloud or local server"
    set signature "F-SBID( --name \"Client.ClaudeCode\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"claude-cli\"; --no_case; --context header; --weight 55;)"
  next

  edit "Client.OpenAI.Python.SDK"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Python OpenAI SDK client - AsyncOpenAI/Python user-agent indicates scripted or programmatic LLM API access"
    set signature "F-SBID( --name \"Client.OpenAI.Python.SDK\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"AsyncOpenAI/Python\"; --no_case; --context header; --weight 50;)"
  next

  edit "Client.CherryStudio"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Cherry Studio Electron client - CherryStudio UA"
    set signature "F-SBID( --name \"Client.CherryStudio\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"CherryStudio\"; --no_case; --context header; --weight 55;)"
  next

end
§5

CLI — Base Model Families

# Match on /v1/chat/completions URI + model name in POST body
config application custom

  edit "Model.Llama"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Meta Llama family - llama- avoids body text FP"
    set signature "F-SBID( --name \"Model.Llama\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"llama-\"; --no_case; --context body; --weight 60;)"
  next

  edit "Model.Mistral"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Mistral AI model family - covers Mistral, Mixtral, Ministral, Magistral, Devstral"
    set signature "F-SBID( --name \"Model.Mistral\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"mistral\"; --no_case; --context body; --weight 60;)"
  next

  edit "Model.Phi"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Microsoft Phi family - phi- avoids body text FP"
    set signature "F-SBID( --name \"Model.Phi\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"phi-\"; --no_case; --context body; --weight 60;)"
  next

  edit "Model.Gemma"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Google Gemma family - gemma- avoids body text FP"
    set signature "F-SBID( --name \"Model.Gemma\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"gemma-\"; --no_case; --context body; --weight 60;)"
  next

  edit "Model.Qwen"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Alibaba Qwen model family"
    set signature "F-SBID( --name \"Model.Qwen\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"qwen\"; --no_case; --context body; --weight 60;)"
  next

  edit "Model.DeepSeek"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "DeepSeek AI model family - weight 65 wins over Llama for deepseek-r1-distill-llama variants"
    set signature "F-SBID( --name \"Model.DeepSeek\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"deepseek\"; --no_case; --context body; --weight 65;)"
  next

  edit "Model.Nemotron"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "NVIDIA Nemotron model family"
    set signature "F-SBID( --name \"Model.Nemotron\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"nemotron\"; --no_case; --context body; --weight 60;)"
  next

  edit "Model.LFM"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Liquid AI LFM model family"
    set signature "F-SBID( --name \"Model.LFM\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"lfm\"; --no_case; --context body; --weight 60;)"
  next

  edit "Model.GLM"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Z.ai GLM model family"
    set signature "F-SBID( --name \"Model.GLM\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"glm\"; --no_case; --context body; --weight 60;)"
  next

  edit "Model.Granite"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "IBM Granite family - granite- avoids body text FP"
    set signature "F-SBID( --name \"Model.Granite\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"granite-\"; --no_case; --context body; --weight 60;)"
  next

  edit "Model.GPT-OSS"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "OpenAI gpt-oss open source model family"
    set signature "F-SBID( --name \"Model.GPT-OSS\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"gpt-oss\"; --no_case; --context body; --weight 60;)"
  next

  edit "Model.OLMo"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Allen AI OLMo model family"
    set signature "F-SBID( --name \"Model.OLMo\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"olmo\"; --no_case; --context body; --weight 60;)"
  next

  edit "Model.Ernie"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Baidu Ernie family - ernie- avoids body text FP"
    set signature "F-SBID( --name \"Model.Ernie\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"ernie-\"; --no_case; --context body; --weight 60;)"
  next

  edit "Model.MiniMax"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "MiniMax model family"
    set signature "F-SBID( --name \"Model.MiniMax\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"minimax\"; --no_case; --context body; --weight 60;)"
  next

  edit "Model.Falcon"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "TII Falcon family - falcon- avoids body text FP"
    set signature "F-SBID( --name \"Model.Falcon\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"falcon-\"; --no_case; --context body; --weight 60;)"
  next

  edit "Model.Command"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Cohere Command-R family - command-r pattern avoids body text FP"
    set signature "F-SBID( --name \"Model.Command\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"command-r\"; --no_case; --context body; --weight 60;)"
  next

  edit "Model.InternLM"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Shanghai AI Lab InternLM model family"
    set signature "F-SBID( --name \"Model.InternLM\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"internlm\"; --no_case; --context body; --weight 60;)"
  next

  edit "Model.Solar"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Upstage Solar family - solar- avoids body text FP"
    set signature "F-SBID( --name \"Model.Solar\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"solar-\"; --no_case; --context body; --weight 60;)"
  next

  edit "Model.Kimi"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Moonshot AI Kimi model family - K2, K2.5, K2.6"
    set signature "F-SBID( --name \"Model.Kimi\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"kimi\"; --no_case; --context body; --weight 60;)"
  next

end
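The base-model entries above differ only in name, pattern, weight, and comment, so the CLI block can be generated from a table instead of edited by hand. The sketch below is a convenience script, not an official Fortinet tool — the `render` helper is my own, and only two table rows are filled in; review the output before pasting it into the CLI:

```python
# Template mirroring the structure of the base-model entries above.
TEMPLATE = '''  edit "{name}"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "{comment}"
    set signature "F-SBID( --name \\"{name}\\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \\"/v1/chat/completions\\"; --context uri; --pattern \\"{pattern}\\"; --no_case; --context body; --weight {weight};)"
  next
'''

# (name, pattern, weight, comment) rows taken from the signature index.
MODELS = [
    ("Model.Llama", "llama-", 60, "Meta Llama family - llama- avoids body text FP"),
    ("Model.DeepSeek", "deepseek", 65, "DeepSeek AI model family"),
    # ...extend with further rows as needed
]

def render() -> str:
    """Emit a complete 'config application custom' block for MODELS."""
    entries = "".join(
        TEMPLATE.format(name=n, pattern=p, weight=w, comment=c)
        for n, p, w, c in MODELS
    )
    return "config application custom\n\n" + entries + "\nend\n"

print(render())
```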
§6

CLI — Fine-Tune Organizations

config application custom

  edit "Model.Hermes"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "NousResearch Hermes fine-tune family - Hermes 2/3/4 and OpenHermes on any base model"
    set signature "F-SBID( --name \"Model.Hermes\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"hermes\"; --no_case; --context body; --weight 62;)"
  next

    # SECURITY PRIORITY — weight 68, always wins. Recommend Block after monitor phase.
  edit "Model.Dolphin"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Eric Hartford Dolphin - UNCENSORED. All AI safety refusals removed. Block recommended after monitor phase."
    set signature "F-SBID( --name \"Model.Dolphin\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"dolphin\"; --no_case; --context body; --weight 68;)"
  next

  edit "Model.Zephyr"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "HuggingFace H4 Zephyr DPO fine-tune series"
    set signature "F-SBID( --name \"Model.Zephyr\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"zephyr\"; --no_case; --context body; --weight 62;)"
  next

  edit "Model.OpenChat"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "OpenChat project fine-tune series"
    set signature "F-SBID( --name \"Model.OpenChat\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"openchat\"; --no_case; --context body; --weight 62;)"
  next

  edit "Model.Wizard"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Microsoft Research WizardLM family - WizardLM, WizardCoder, WizardMath"
    set signature "F-SBID( --name \"Model.Wizard\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"wizard\"; --no_case; --context body; --weight 62;)"
  next

  edit "Model.Vicuna"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "UC Berkeley LMSYS Vicuna Llama fine-tune series"
    set signature "F-SBID( --name \"Model.Vicuna\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"vicuna\"; --no_case; --context body; --weight 62;)"
  next

  edit "Model.Orca"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Microsoft Orca and community OpenOrca fine-tune series"
    set signature "F-SBID( --name \"Model.Orca\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"orca\"; --no_case; --context body; --weight 62;)"
  next

  edit "Model.Airoboros"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "jondurbin Airoboros creative and roleplay fine-tune series"
    set signature "F-SBID( --name \"Model.Airoboros\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"airoboros\"; --no_case; --context body; --weight 62;)"
  next

  edit "Model.Phind"
    set category 36
    set technology 2
    set behavior 9
    set vendor 0
    set protocol "TCP"
    set comment "Phind code-specialized CodeLlama fine-tune series"
    set signature "F-SBID( --name \"Model.Phind\"; --service HTTP; --protocol tcp; --flow from_client; --pattern \"/v1/chat/completions\"; --context uri; --pattern \"phind\"; --no_case; --context body; --weight 62;)"
  next

end
§7

Priority Resolution — Common Overlap Scenarios

| Example Model ID | Patterns Matched | Winning Signature | Reason |
|---|---|---|---|
| deepseek-r1-distill-llama-8b | deepseek (65), llama- (60) | Model.DeepSeek | Weight 65 beats 60 |
| nous-hermes-2-llama-3.1-8b | hermes (62), llama- (60) | Model.Hermes | Weight 62 beats 60 |
| dolphin-2.9-llama3-8b | dolphin (68), llama (60) | Model.Dolphin | Weight 68 always wins |
| dolphin-2.5-mixtral-8x7b | dolphin (68), mistral (60) | Model.Dolphin | Weight 68 always wins |
| openhermes-2.5-mistral-7b | hermes (62), mistral (60) | Model.Hermes | Weight 62 beats 60 |
| phind-codellama-34b-v2 | phind (62), llama- (60) | Model.Phind | Weight 62 beats 60 |
| devstral-small-2505 | mistral (60) | Model.Mistral | "mistral" matches in the mistralai/ publisher prefix of the full model key (the bare name devstral contains no "mistral") |
| nous-capybara-34b | none | No model match | Falls through to infrastructure sig only |
§8

Traffic Coverage Matrix

| Traffic Scenario | Infra Sig | Client Sig | Model Sig | FortiGuard |
|---|---|---|---|---|
| AnythingLLM → LM Studio (HTTP) | ✓ #01 #03 | ✓ #04 | ✓ #10–37 | — |
| AnythingLLM → LM Studio (HTTPS+DPI) | ✓ #01 #03 | ✓ #04 | ✓ #10–37 | — |
| Claude Code → LM Studio (/v1/messages) | ✓ #05 #06 | ✓ #07 | — (Anthropic format) | ✓ Cloud path |
| Claude Code → Anthropic cloud | — | ✓ #07 | — | ✓ Native |
| Python script → local LLM | ✓ #03 | ✓ #08 | ✓ #10–37 | — |
| Open WebUI → local LLM | ✓ #03 | — (generic UA) | ✓ #10–37 | — |
| curl / manual API call | ✓ #03 | — (no UA) | ✓ #10–37 | — |
| Any client → unknown future /v1/chat server | ✓ #03 | ✓ if known UA | ✓ #10–37 | — |
| Any client → unknown future /v1/messages server | ✓ #05 | ✓ if known UA | — (Anthropic format) | — |
| Browser → ChatGPT / Claude.ai | — | — | — | ✓ Native |
| Any client → Ollama | ✓ #03 | ✓ if known UA | ✓ #10–37 | ✓ Native |
| Client + server on same subnet (no micro-seg) | — | — | — | — |

✓ = matched  |  — = not matched  |  conditional entries (e.g. "✓ if known UA") = matched only when that condition is present

§9

FortiGuard Integration Notes

These signatures complement FortiGuard — they do not replace it. The distinction is not simply cloud versus local. FortiGuard signatures may exist for some of these applications but may not match against the specific local communication mechanisms documented here — the URI paths, User-Agent headers, and POST body patterns used by local inference servers and their clients. FortiGuard signatures may also be updated in the future to detect some of these patterns, or local applications may evolve in ways that cause FortiGuard signatures to trigger where they previously did not. This library's value is in the specific identification mechanisms discovered through direct traffic analysis of local LLM infrastructure. Always verify current FortiGuard coverage in your own environment.
| Application | FortiGuard Coverage | This Library Adds |
|---|---|---|
| Claude / Anthropic | Signatures exist targeting Anthropic API patterns | Local /v1/messages servers and Claude Code client UA not matched by FortiGuard cloud signatures |
| Ollama | Signatures exist for Ollama | Local Ollama /v1/chat/completions serving — FortiGuard may not match all local serving patterns |
| ChatGPT / OpenAI | Signatures exist targeting OpenAI API patterns | Local OpenAI-compatible servers and Python SDK client UA |
| LM Studio | No FortiGuard signature (local-only app at time of testing) | Full coverage — native API, Anthropic API, model identity |
| AnythingLLM | No FortiGuard signature (self-hosted at time of testing) | Full coverage — server endpoint and client UA |
| Model families | Not applicable — FortiGuard does not identify model-level traffic | All 28 model signatures (#10–37) for local deployment |

Verification Commands

# Verify all custom signatures saved correctly
show application custom

# Check a specific signature
show application custom "Model.Dolphin"

# Verify AIAP GenAI database is current
diagnose autoupdate versions | grep -A 6 GenAI

# View application control log entries
execute log display
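FortiOS app-ctrl log entries use the usual key=value syslog field syntax, so hits can be extracted offline with a few lines of scripting. This is a sketch: the field names follow FortiOS log conventions, but the sample line below is fabricated for illustration and the exact field set varies by version:

```python
import re

# Illustrative FortiOS-style app-ctrl log line (fabricated for the sketch).
line = ('date=2026-05-06 time=10:15:01 logid="1059028704" type="utm" '
        'subtype="app-ctrl" action="pass" app="Model.Dolphin" appcat="GenAI"')

def parse_kv(log_line: str) -> dict[str, str]:
    """Parse key=value pairs, with or without surrounding double quotes."""
    return {k: v.strip('"')
            for k, v in re.findall(r'(\w+)=("[^"]*"|\S+)', log_line)}

fields = parse_kv(line)
print(fields["app"], fields["appcat"])
```

A loop over exported log lines with `parse_kv` makes it easy to tally which custom signatures are actually firing during the monitor phase.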
§10

Network Architecture Considerations

Layer 2 Traffic and Micro-Segmentation

Same-subnet traffic bypasses the FortiGate entirely. These signatures only inspect traffic that crosses the FortiGate at layer 3. If the LM Studio server (or any other inference server) and the client making requests sit on the same subnet, their traffic never crosses a layer 3 boundary and the FortiGate will not see it — the signatures will not fire regardless of how the Application Control profile is configured.

To enforce application control on same-subnet LLM traffic, micro-segmentation is required. This means placing inference servers on a dedicated VLAN or subnet that routes through the FortiGate, so that all client traffic — regardless of client location — must cross a layer 3 boundary to reach the inference server.

| Scenario | Traffic visible to FortiGate? | Signatures fire? |
|---|---|---|
| Client and server on different subnets (routed through FortiGate) | ✓ Yes | ✓ Yes |
| Client and server on same subnet, inference server on dedicated VLAN (micro-segmented) | ✓ Yes | ✓ Yes |
| Client and server on same flat subnet, no micro-segmentation | — No | — No |
| Localhost only (LM Studio and client on same machine) | — No | — No (endpoint control required) |

Localhost traffic (where both the inference server and the client run on the same machine) is never visible to the FortiGate and requires endpoint-level controls rather than network-level inspection.

§11

Versioning & Update Strategy

Version History

| Version | Date | Summary |
|---|---|---|
| v1.3.0 | 2026-05-06 | Added Model.Kimi — Moonshot AI Kimi model family (K2, K2.5, K2.6), pattern kimi, weight 60. Total: 39 signatures. |
| v1.2.0 | 2026-05-06 | Added AnythingLLM.NativeAPI (sig 04b, weight 53) — triple-condition signature identifying AnythingLLM model enumeration via LM Studio native API. Differentiates Node.js 18+ fetch (sec-fetch-mode present) from LM Studio's http-module polling (sec-fetch-mode absent). Fixed Model.Solar false positive (production-confirmed: word "solar" in chat content triggered the sig — pattern tightened to solar-). Proactive hyphen anchoring applied to 6 base model signatures (llama-, phi-, gemma-, granite-, falcon-, ernie-) to reduce body-text false positive risk. Total: 38 signatures. |
| v1.1.0 | 2026-05-06 | Added Client.CherryStudio — Cherry Studio Electron client confirmed in traffic analysis. Fixed Model.Command false positive (pattern command → command-r, confirmed in production logs). Fixed AnythingLLM.OpenAI.SDK weight 60 → 55 to restore correct weight hierarchy. Total: 37 signatures. |
| v1.0.1 | 2026-05-02 | Fixed LM.Studio.Native.API false positive on cloud AI web frontends (chat.qwen.ai confirmed). Split into LM.Studio.Native.API (v0 path, unique) and LM.Studio.Native.APIv1 (v1 path + node User-Agent). Total: 36 signatures. |
| v1.0.0 | 2026-04-26 | Initial release. 35 signatures covering infrastructure, base model families, and fine-tune organizations across OpenAI-compatible and Anthropic-compatible API formats. |

When to Update This Library

The LLM ecosystem moves quickly. The following events should trigger a review and potential update of these signatures:

| Trigger | Action | Priority |
|---|---|---|
| New model family released | Add base model signature (weight 60) if model name is distinct and likely to be deployed locally | Medium |
| New fine-tune organization emerges | Add fine-tuner signature (weight 62) if org has a consistent naming pattern across their catalog | Medium |
| False positive confirmed in logs | Tighten the pattern or add a secondary discriminator (URI + header combination). Document in CHANGELOG. | High |
| New local inference server released | Check URI paths and User-Agent strings; add infrastructure signature if unique patterns exist | Medium |
| New LLM client tool released | Check User-Agent header; add client identification signature if string is stable and distinctive | Medium |
| LM Studio version update | Verify /api/v0/ and /api/v1/ paths still apply; check for new API versions introduced | Periodic |
| FortiGuard signature update | Re-test cloud AI web frontends to verify no new false positives introduced by FortiGuard changes | Periodic |
| FortiOS upgrade | Re-verify all signatures save correctly and Category 36 remains available on the new version | On upgrade |

Recommended Review Sources

Monitor these sources to identify when new signatures may be needed: lmstudio.ai/models for new model additions to the LM Studio catalog, huggingface.co/models sorted by downloads for trending locally-deployable models, and the GitHub repository Issues for community-reported false positives and signature requests.

Versioning Convention

This library follows semantic versioning:

| Version Type | When Used | Example |
|---|---|---|
| Patch (x.x.N) | Bug fixes, false positive corrections, comment updates | 1.0.0 → 1.0.1 |
| Minor (x.N.0) | New signatures added, no breaking changes | 1.0.x → 1.1.0 |
| Major (N.0.0) | Signatures removed, renamed, or weights significantly restructured | 1.x.x → 2.0.0 |

When updating to a new version, always check the CHANGELOG for any signatures that have been removed or renamed before pasting new CLI blocks, as the FortiGate will error if you attempt to set a pattern on an existing signature without first deleting it.
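As a sketch, removing a superseded entry before pasting its replacement looks like this in the CLI (Model.Example is a placeholder name; substitute whichever signature the CHANGELOG flags as removed or renamed):

```
config application custom
  delete "Model.Example"
end
# ...then paste the replacement CLI block from the new version
```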