
Commit 049fb6c

readme: add high level detail around IPC

High level information about how IPC works and some specifics for the IPC3 and IPC4 protocols.

Signed-off-by: Liam Girdwood <liam.r.girdwood@linux.intel.com>

1 parent 295fc90 commit 049fb6c

File tree

3 files changed: +317 −0 lines changed

src/ipc/README.md

Lines changed: 98 additions & 0 deletions
# Inter-Processor Communication (IPC) Core Architecture

This directory contains the common foundation for all Inter-Processor Communication (IPC) within the Sound Open Firmware (SOF) project. It bridges the gap between hardware mailbox interrupts and the version-specific (IPC3/IPC4) message handlers.

## Overview
The Core IPC layer is completely agnostic to the specific structure or content of the messages (whether they are IPC3 stream commands or IPC4 pipeline messages). Its primary responsibilities are:

1. **Message State Management**: Tracking whether a message is being processed, queued, or completed.
2. **Interrupt Bridging**: Routing incoming platform interrupts into the Zephyr or native SOF thread-domain scheduler.
3. **Queueing**: Safe traversal and deferred processing via `k_work` items or SOF scheduler tasks.
4. **Platform Acknowledgment**: Signaling the hardware mailbox layer to confirm receipt of a message or to signal that its processing has completed.
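
The first responsibility can be pictured as a small state machine. The enum and helper below are a simplified sketch written for this README; the names are illustrative, not actual SOF identifiers.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the message lifecycle the core layer tracks.
 * These names are illustrative only, not real SOF symbols. */
enum ipc_msg_state {
	IPC_MSG_IDLE,       /* no message pending */
	IPC_MSG_QUEUED,     /* received, waiting for the worker thread */
	IPC_MSG_PROCESSING, /* handed to the IPC3/IPC4 handler */
	IPC_MSG_COMPLETE    /* acknowledged back to the platform layer */
};

/* Only forward transitions through the lifecycle are legal; completion
 * is valid only from the processing state. */
static bool ipc_msg_transition_ok(enum ipc_msg_state from, enum ipc_msg_state to)
{
	switch (from) {
	case IPC_MSG_IDLE:       return to == IPC_MSG_QUEUED;
	case IPC_MSG_QUEUED:     return to == IPC_MSG_PROCESSING;
	case IPC_MSG_PROCESSING: return to == IPC_MSG_COMPLETE;
	case IPC_MSG_COMPLETE:   return to == IPC_MSG_IDLE;
	}
	return false;
}
```

The real framework additionally serializes these transitions against the mailbox IRQ, which the model above leaves out.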

## Architecture Diagram

The basic routing of any IPC message moves from a hardware interrupt, through the platform driver, into the core IPC handlers, and ultimately up to the version-specific handlers.
```mermaid
graph TD
    Platform[Platform / Mailbox HW] -->|IRQ| CoreIPC[Core IPC Framework]

    subgraph CoreIPC [src/ipc/ipc-common.c]
        Queue[Msg Queue / Worker Task]
        Dispatcher[IPC Message Dispatcher]
        PM[Power Management Wait/Wake]

        Queue --> Dispatcher
        Dispatcher --> PM
    end

    Dispatcher -->|Version Specific Parsing| IPC3[IPC3 Handler]
    Dispatcher -->|Version Specific Parsing| IPC4[IPC4 Handler]

    IPC3 -.-> CoreIPC
    IPC4 -.-> CoreIPC
    CoreIPC -.->|Ack| Platform
```

## Processing Flow

When the host writes a command to the IPC mailbox and triggers an interrupt, the hardware-specific driver (`src/platform/...`) catches the IRQ and eventually calls down into the IPC framework.

Different RTOS environments (Zephyr vs. the bare-metal native SOF scheduler) handle the thread handoff differently. On Zephyr, `ipc_work_handler` leans heavily on `k_work` queues for this handoff.
### Receiving Messages (Host -> DSP)
```mermaid
sequenceDiagram
    participant Host
    participant Platform as Platform Mailbox (IRQ)
    participant CoreIPC as Core IPC Worker
    participant Handler as Version-Specific Handler (IPC3/4)

    Host->>Platform: Writes Mailbox, Triggers Interrupt
    activate Platform
    Platform->>CoreIPC: ipc_schedule_process()
    deactivate Platform

    Note over CoreIPC: Worker thread wakes up

    activate CoreIPC
    CoreIPC->>Platform: ipc_platform_wait_ack() (Optional blocking)
    CoreIPC->>Handler: version_specific_command_handler()

    Handler-->>CoreIPC: Command Processed (Status Header)
    CoreIPC->>Platform: ipc_complete_cmd()
    Platform-->>Host: Signals Completion Mailbox / IRQ
    deactivate CoreIPC
```
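
The worker's job in that sequence is essentially "decode, delegate, report status". A condensed mock of that shape, with made-up types (the real handler table and status header layout differ):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model of the receive path: hand the command word to a
 * version-specific handler, then build the status written back to the
 * mailbox. Names and layouts are illustrative, not SOF's real API. */
typedef int (*ipc_handler_t)(uint32_t cmd);

struct ipc_mock {
	ipc_handler_t handler; /* IPC3 or IPC4 command handler */
	uint32_t reply_status; /* status header written on completion */
};

static void ipc_do_cmd(struct ipc_mock *ipc, uint32_t cmd)
{
	int ret = ipc->handler(cmd); /* version-specific parsing */

	/* encode failure as error flag + errno; success as zero */
	ipc->reply_status = ret < 0 ? 0x80000000u | (uint32_t)-ret : 0;
	/* a real implementation would now call ipc_complete_cmd() to
	 * raise the completion IRQ towards the host */
}

/* sample handlers for demonstration */
static int ok_handler(uint32_t cmd)   { (void)cmd; return 0; }
static int fail_handler(uint32_t cmd) { (void)cmd; return -22; /* -EINVAL */ }
```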

### Sending Messages (DSP -> Host)

Firmware-initiated messages (like notifications for position updates, traces, or XRUNs) rely on a queue if the hardware is busy.
```mermaid
sequenceDiagram
    participant DSP as DSP Component (e.g. Pipeline Tracker)
    participant Queue as IPC Message Queue
    participant Platform as Platform Mailbox

    DSP->>Queue: ipc_msg_send() / ipc_msg_send_direct()
    activate Queue
    Queue-->>Queue: Add to Tx list (if BUSY)
    Queue->>Platform: Copy payload to mailbox and send

    alt If host is ready
        Platform-->>Queue: Success
        Queue->>Platform: Triggers IRQ to Host
    else If host requires delayed ACKs
        Queue-->>DSP: Queued pending prior completion
    end
    deactivate Queue
```
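
The busy/queued branching can be modeled with a toy Tx queue. This is a sketch for illustration only; SOF's actual queue is a linked list of message objects, not a fixed array.

```c
#include <assert.h>
#include <stdbool.h>

#define TX_SLOTS 8

/* Toy model of the DSP->host send path: if the mailbox is busy the
 * message is parked on a Tx list and flushed from the completion path. */
struct tx_queue {
	int pending[TX_SLOTS];
	int count;
	bool mailbox_busy;
};

/* Returns true if the message went straight to the mailbox,
 * false if it was queued behind an earlier message. */
static bool tx_send(struct tx_queue *q, int msg_id)
{
	if (q->mailbox_busy) {
		if (q->count < TX_SLOTS)
			q->pending[q->count++] = msg_id;
		return false;
	}
	q->mailbox_busy = true; /* busy until the host ACKs */
	return true;
}

/* Called from the ACK path: frees the mailbox, sends the next message. */
static void tx_complete(struct tx_queue *q)
{
	q->mailbox_busy = false;
	if (q->count > 0) {
		/* pop the head of the list (shift for simplicity) */
		for (int i = 1; i < q->count; i++)
			q->pending[i - 1] = q->pending[i];
		q->count--;
		q->mailbox_busy = true; /* next message is now in flight */
	}
}
```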

## Global IPC Objects and Helpers

* `ipc_comp_dev`: Wrapper structure linking generic devices (`comp_dev`) to their IPC pipeline and endpoint identifiers.
* `ipc_get_comp_dev` / `ipc_get_ppl_comp`: Lookup helpers that use the central graph tracking to find components either directly by component ID or by traversing the pipeline graph from a given `pipeline_id` and direction (upstream/downstream).
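
The wrapper-plus-lookup idea can be sketched in a few lines. The struct fields and function below are stand-ins invented for this example, not the real `ipc_comp_dev` layout.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for ipc_comp_dev: ties a component to its IPC identifiers so
 * lookups can go by component ID or by pipeline ID. */
struct ipc_comp_dev_mock {
	uint32_t id;          /* unique component ID from the host */
	uint32_t pipeline_id; /* pipeline this endpoint belongs to */
};

/* Direct lookup by component ID, in the spirit of ipc_get_comp_dev(). */
static struct ipc_comp_dev_mock *
ipc_get_comp_mock(struct ipc_comp_dev_mock *list, size_t n, uint32_t id)
{
	for (size_t i = 0; i < n; i++)
		if (list[i].id == id)
			return &list[i];
	return NULL; /* host referenced an unknown component */
}
```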

src/ipc/ipc3/README.md

Lines changed: 113 additions & 0 deletions
# IPC3 Architecture

This directory houses the Version 3 Inter-Processor Communication handling components. IPC3 is the older, legacy framework used extensively across initial Sound Open Firmware releases, before the transition to IPC4's compound pipeline commands.

## Overview
The IPC3 architecture treats streaming, DAI configurations, and pipeline management as distinct scalar events. Messages arrive containing a specific `sof_ipc_cmd_hdr` denoting the "Global Message Type" (e.g., Stream, DAI, Trace, PM) and the targeted command within that type.

## Command Structure and Routing

Every message received is placed into an Rx buffer and initially routed to `ipc_cmd()`. Based on the `cmd` field inside the `sof_ipc_cmd_hdr`, it delegates to one of the handler subsystems:

* `ipc_glb_stream_message`: Stream/Pipeline configuration and states
* `ipc_glb_dai_message`: DAI parameters and formats
* `ipc_glb_pm_message`: Power Management operations
```mermaid
graph TD
    Mailbox[IPC Mailbox Interrupt] --> Valid[mailbox_validate]
    Valid --> Disp[IPC Core Dispatcher]

    Disp -->|Global Type 1| StreamMsg[ipc_glb_stream_message]
    Disp -->|Global Type 2| DAIMsg[ipc_glb_dai_message]
    Disp -->|Global Type 3| PMMsg[ipc_glb_pm_message]
    Disp -->|Global Type ...| TraceMsg[ipc_glb_trace_message]

    subgraph Stream Commands
        StreamMsg --> StreamAlloc[ipc_stream_pcm_params]
        StreamMsg --> StreamTrig[ipc_stream_trigger]
        StreamMsg --> StreamFree[ipc_stream_pcm_free]
        StreamMsg --> StreamPos[ipc_stream_position]
    end

    subgraph DAI Commands
        DAIMsg --> DAIConf[ipc_msg_dai_config]
    end

    subgraph PM Commands
        PMMsg --> PMCore[ipc_pm_core_enable]
        PMMsg --> PMContext[ipc_pm_context_save / restore]
    end
```
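
The dispatch on the global type boils down to a switch on the top bits of the command word. In the sketch below, the shift and type values are illustrative placeholders, not copied from the SOF IPC headers.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative: the global message type lives in the upper bits of the
 * 32-bit command word. */
#define GLB_TYPE_SHIFT 28
#define GLB_TYPE(cmd)  (((cmd) >> GLB_TYPE_SHIFT) & 0xFu)

enum glb_type_mock {
	GLB_STREAM_MSG = 1,
	GLB_DAI_MSG    = 2,
	GLB_PM_MSG     = 3,
	GLB_TRACE_MSG  = 4,
};

/* Returns the name of the subsystem a real ipc_cmd() would delegate to. */
static const char *ipc_cmd_mock(uint32_t cmd)
{
	switch (GLB_TYPE(cmd)) {
	case GLB_STREAM_MSG: return "ipc_glb_stream_message";
	case GLB_DAI_MSG:    return "ipc_glb_dai_message";
	case GLB_PM_MSG:     return "ipc_glb_pm_message";
	case GLB_TRACE_MSG:  return "ipc_glb_trace_message";
	default:             return "einval"; /* reply with error header */
	}
}
```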

## Processing Flows

### Stream Triggering (`ipc_stream_trigger`)

Triggering is strictly hierarchical in IPC3: pipelines must be built and components fully parsed before any active streaming commands arrive.

1. **Validation**: The IPC layer fetches the host component ID.
2. **Device Lookup**: It searches the component list (`ipc_get_comp_dev`) for the PCM device matching the pipeline.
3. **Execution**: If valid, the pipeline graph is crawled recursively and its state altered via `pipeline_trigger`.
```mermaid
sequenceDiagram
    participant Host
    participant IPC3 as IPC3 Handler (ipc_stream_trigger)
    participant Pipe as Pipeline Framework
    participant Comp as Connected Component

    Host->>IPC3: Send SOF_IPC_STREAM_TRIG_START
    activate IPC3
    IPC3->>IPC3: ipc_get_comp_dev(stream_id)
    IPC3->>Pipe: pipeline_trigger(COMP_TRIGGER_START)
    activate Pipe
    Pipe->>Comp: pipeline_for_each_comp(COMP_TRIGGER_START)
    Comp-->>Pipe: Success (Component ACTIVE)
    Pipe-->>IPC3: Return Status
    deactivate Pipe

    alt If Success
        IPC3-->>Host: Acknowledge Success Header
    else If Error
        IPC3-->>Host: Acknowledge Error Header (EINVAL / EIO)
    end
    deactivate IPC3
```
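
The three steps (validate, look up, execute) condense into a lookup followed by a trigger. The structures and return codes below are invented for this sketch; a real `pipeline_trigger()` recursively walks the whole graph rather than flipping one device's state.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for the PCM device registry searched by ipc_get_comp_dev(). */
struct pcm_dev_mock {
	uint32_t comp_id;
	int state; /* 0 = ready, 1 = active */
};

static int stream_trigger_mock(struct pcm_dev_mock *devs, size_t n,
			       uint32_t comp_id)
{
	for (size_t i = 0; i < n; i++) {
		if (devs[i].comp_id != comp_id)
			continue;
		/* pipeline_trigger() would recurse through the graph here */
		devs[i].state = 1;
		return 0; /* success header back to the host */
	}
	return -22; /* -EINVAL: unknown component, error header to host */
}
```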

### DAI Configuration (`ipc_msg_dai_config`)

DAI (Digital Audio Interface) configuration involves setting up physical I2S, ALH, SSP, or HDA parameters.

1. **Format Unpacking**: Converts the `sof_ipc_dai_config` payload sent from the ALSA driver into the internal DSP structure `ipc_config_dai`.
2. **Device Selection**: Identifies the exact DAI interface and finds its tracking device via `dai_get`.
3. **Hardware Config**: Applies the unpacked settings directly to the hardware via the specific DAI driver's `set_config` function.
```mermaid
sequenceDiagram
    participant Host
    participant IPC3 as IPC3 Handler (ipc_msg_dai_config)
    participant DAIDev as DAI Framework (dai_get)
    participant HWDriver as HW Specific Driver (e.g. SSP)

    Host->>IPC3: Send SOF_IPC_DAI_CONFIG (e.g., SSP1, I2S Format)
    activate IPC3

    IPC3->>IPC3: build_dai_config()
    IPC3->>DAIDev: dai_get(type, index)
    DAIDev-->>IPC3: pointer to dai instance

    IPC3->>HWDriver: dai_set_config()
    activate HWDriver
    HWDriver-->>HWDriver: configures registers
    HWDriver-->>IPC3: hardware configured
    deactivate HWDriver

    IPC3-->>Host: Acknowledged Setting
    deactivate IPC3
```
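
The format-unpacking step is a wire-struct to internal-struct translation. Both structs below are simplified stand-ins for `sof_ipc_dai_config` and `ipc_config_dai`; the real structures carry many more fields.

```c
#include <assert.h>
#include <stdint.h>

struct wire_dai_config {    /* stands in for sof_ipc_dai_config */
	uint32_t type;      /* e.g. SSP, HDA, ALH */
	uint32_t dai_index; /* which physical interface */
	uint32_t format;    /* e.g. I2S frame format bits */
};

struct dsp_dai_config {     /* stands in for ipc_config_dai */
	uint32_t type;
	uint32_t dai_index;
	uint32_t format;
	int configured;
};

/* Translate the host's wire payload into the internal DSP view before
 * it is handed to the hardware driver. */
static void build_dai_config_mock(const struct wire_dai_config *in,
				  struct dsp_dai_config *out)
{
	out->type = in->type;
	out->dai_index = in->dai_index;
	out->format = in->format;
	out->configured = 1; /* the real flow now calls dai_set_config() */
}
```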

## Mailbox and Validation (`mailbox_validate`)

All commands passing through this layer enforce rigid payload boundaries. `mailbox_validate()` reads the first word directly from mailbox memory, identifying the command type and size before any further parameters are parsed out of shared RAM, so that host/DSP mismatches cannot cascade.
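
The essence of that check is a bounds test on the declared size before any further parsing. The constants below are illustrative; the real window size and minimum header size come from the platform definitions.

```c
#include <assert.h>
#include <stdint.h>

#define MAILBOX_WINDOW_SIZE 0x1000u /* illustrative shared-window size */

/* Sketch: read the declared size from the first header word and reject
 * anything that cannot fit the mailbox window. Returns the size on
 * success, negative errno-style code on failure. */
static int mailbox_validate_mock(uint32_t hdr_size)
{
	/* must hold at least a minimal header and fit the shared window */
	if (hdr_size < sizeof(uint32_t) * 2 || hdr_size > MAILBOX_WINDOW_SIZE)
		return -22; /* -EINVAL */
	return (int)hdr_size;
}
```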

src/ipc/ipc4/README.md

Lines changed: 106 additions & 0 deletions
# IPC4 Architecture

This directory holds the handlers and topology-parsing logic for Inter-Processor Communication Version 4. IPC4 introduces a significantly denser, compound-command structure built around the concepts of "pipelines" and dynamic "modules" rather than static DSP stream roles.

## Overview
Unlike older iterations (IPC3) which trigger single components via scalar commands, IPC4 uses compound structures. A single host interrupt might contain batch operations like building an entire processing chain, setting module parameters sequentially, and triggering a start across multiple interconnected blocks simultaneously.

## Message Handling and Dispatch
IPC4 messages are received via the generic IPC handler entry point `ipc_cmd()`. For IPC4 FW_GEN (global) messages, `ipc_cmd()` dispatches to `ipc4_process_glb_message()`, which then determines if the incoming payload is a true global configuration message or if it's meant to be dispatched to a specific instantiated module.
```mermaid
graph TD
    Mailbox[IPC Mailbox Interrupt] --> CoreIPC[ipc_cmd]

    CoreIPC --> TypeSel[Decode IPC Message Type]
    TypeSel -->|IPC4 FW_GEN| Disp[ipc4_process_glb_message]

    Disp -->|Global Message| Global[Global Handler]
    Disp -->|Module Message| Mod[Module Handler]

    subgraph Global Handler
        Global --> NewPipe[ipc4_new_pipeline]
        Global --> DelPipe[ipc4_delete_pipeline]
        Global --> MemMap[ipc4_process_chain_dma]
        Global --> SetPipe[ipc4_set_pipeline_state]
    end

    subgraph Module Handler
        Mod --> InitMod[ipc4_init_module_instance]
        Mod --> SetMod[ipc4_set_module_params]
        Mod --> GetMod[ipc4_get_module_params]
        Mod --> Bind[ipc4_bind]
        Mod --> Unbind[ipc4_unbind]
    end
```
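
The global-vs-module split hinges on a target field in the IPC4 primary header. The bit position below is an illustrative assumption for this sketch, not taken from the IPC4 header definitions.

```c
#include <assert.h>
#include <stdint.h>

/* Assumed layout for illustration: one header bit separates firmware
 * global messages from messages addressed to a module instance. */
#define MSG_TGT_SHIFT 30
#define MSG_TGT(hdr)  (((hdr) >> MSG_TGT_SHIFT) & 0x1u)

enum msg_tgt_mock { TGT_FW_GEN = 0, TGT_MODULE = 1 };

/* The dispatcher routes to the Module Handler when the target field is
 * set, otherwise to the Global Handler. */
static int is_module_msg(uint32_t primary_hdr)
{
	return MSG_TGT(primary_hdr) == TGT_MODULE;
}
```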

## Processing Flows

### Pipeline State Management (`ipc4_set_pipeline_state`)

The core driver of graph execution in IPC4 is `ipc4_set_pipeline_state()`. It accepts a multi-stage request (e.g., `START`, `PAUSE`, `RESET`) and coordinates triggering the internal pipelines.

1. **State Translation**: It maps the incoming IPC4 state request to an internal SOF state (e.g., `IPC4_PIPELINE_STATE_RUNNING` -> `COMP_TRIGGER_START`).
2. **Graph Traversal**: It fetches the pipeline object associated with the command and begins preparing it (`ipc4_pipeline_prepare`).
3. **Trigger Execution**: It executes `ipc4_pipeline_trigger()`, recursively changing states across the internal graphs and alerting either the LL scheduler or DP threads.
```mermaid
sequenceDiagram
    participant Host
    participant IPC4Set as ipc4_set_pipeline_state
    participant PPLPrep as ipc4_pipeline_prepare
    participant PPLTrig as ipc4_pipeline_trigger
    participant Comp as Graph Components

    Host->>IPC4Set: IPC4_PIPELINE_STATE_RUNNING
    activate IPC4Set

    IPC4Set->>PPLPrep: Maps to COMP_TRIGGER_START
    PPLPrep->>Comp: Applies PCM params & formatting
    Comp-->>PPLPrep: Components ready

    IPC4Set->>PPLTrig: execute trigger
    PPLTrig->>Comp: pipeline_trigger(COMP_TRIGGER_START)
    Comp-->>PPLTrig: Success

    IPC4Set-->>Host: Reply: ipc4_send_reply()
    deactivate IPC4Set
```
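
The state-translation step is a simple table lookup. The enum values below are made up for the example; only the mapping idea (IPC4 pipeline state to internal component trigger) matches the text.

```c
#include <assert.h>

enum ipc4_ppl_state_mock { PPL_INVALID, PPL_RUNNING, PPL_RESET, PPL_PAUSED };
enum comp_trigger_mock { TRIG_NONE, TRIG_START, TRIG_STOP, TRIG_PAUSE };

/* Map the host-requested IPC4 pipeline state onto the internal trigger
 * that pipeline_trigger() understands. */
static enum comp_trigger_mock ppl_state_to_trigger(enum ipc4_ppl_state_mock s)
{
	switch (s) {
	case PPL_RUNNING: return TRIG_START;
	case PPL_RESET:   return TRIG_STOP;
	case PPL_PAUSED:  return TRIG_PAUSE;
	default:          return TRIG_NONE; /* reject unknown states */
	}
}
```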

### Module Instantiation and Binding (`ipc4_bind`)

In IPC4, modules (components) are bound together dynamically at runtime rather than constructed statically by the firmware at boot.

1. **Instantiation**: `ipc4_init_module_instance()` allocates the module from the DSP heap based on its UUID.
2. **Binding**: `ipc4_bind()` takes two module IDs and dynamically connects their sink and source pins using intermediate `comp_buffer` objects.
```mermaid
sequenceDiagram
    participant Host
    participant IPC4Bind as ipc4_bind
    participant SrcMod as Source Module
    participant SinkMod as Sink Module
    participant Buff as Connection Buffer

    Host->>IPC4Bind: Bind Src(ID) -> Sink(ID)
    activate IPC4Bind

    IPC4Bind->>SrcMod: Locate by ID
    IPC4Bind->>SinkMod: Locate by ID

    IPC4Bind->>Buff: buffer_new() (Create Intermediate Storage)

    IPC4Bind->>SrcMod: Bind source pin to Buff (via comp_bind/comp_buffer_connect)
    IPC4Bind->>SinkMod: Bind sink pin to Buff (via comp_bind/comp_buffer_connect)

    IPC4Bind-->>Host: Reply: Linked
    deactivate IPC4Bind
```
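
A miniature model of that bind sequence: allocate an intermediate buffer, then point the source's output pin and the sink's input pin at it. All types here are stand-ins for `comp_dev`/`comp_buffer`, not the real structures.

```c
#include <assert.h>
#include <stdint.h>

struct buf_mock { int allocated; };

struct mod_mock {
	uint32_t id;
	struct buf_mock *out; /* source pin */
	struct buf_mock *in;  /* sink pin */
};

/* Connect src -> buf -> sink, mirroring the ipc4_bind sequence above. */
static int bind_mock(struct mod_mock *src, struct mod_mock *sink,
		     struct buf_mock *buf)
{
	if (!src || !sink || !buf)
		return -22; /* -EINVAL: lookup by ID failed */
	buf->allocated = 1;  /* buffer_new() in the real flow */
	src->out = buf;      /* connect source pin */
	sink->in = buf;      /* connect sink pin */
	return 0;
}
```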

## Compound Messages (`ipc_wait_for_compound_msg`)

To accelerate initialization, IPC4 supports compound commands: the host can send multiple IPC messages chained back-to-back under a single mailbox trigger flag before waiting for ACKs.

`ipc_compound_pre_start` and `ipc_compound_post_start` manage this batch execution safely, without overflowing the Zephyr work queues or leaving the hardware in an inconsistent intermediate state.
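
The pre/post pairing amounts to reference counting the outstanding messages in a batch. This sketch models only the counting; the real helpers also track message IDs and time out on stalled batches.

```c
#include <assert.h>

/* Toy bookkeeping for a compound batch: pre_start registers an
 * outstanding message, post_start retires it, and the batch is complete
 * only when the counter drains back to zero. */
struct compound_mock { int outstanding; };

static void compound_pre_start(struct compound_mock *c)  { c->outstanding++; }
static void compound_post_start(struct compound_mock *c) { c->outstanding--; }
static int  compound_done(const struct compound_mock *c) { return c->outstanding == 0; }
```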
