13 Commits

Author SHA1 Message Date
gaoro-xiao d36f6b4bee docs: record v1.1.0 low-RAM TCP backpressure design 2026-05-08 05:53:10 +08:00
gaoro-xiao 2679db4129 fix(uart): gate TCP forwarding by UART TX capacity 2026-05-08 05:52:58 +08:00
gaoro-xiao 245d98f58e fix(tcp): add low-RAM delayed-ack buffering for TCP bridge 2026-05-08 05:52:45 +08:00
gaoro-xiao 5567c7412d docs: sync baud config guidance
Document the AT+BAUD flow in the debug guide, including U0/U1 data-port mapping and the save/reset behavior required for USART2/USART3 baud changes.
2026-04-28 20:28:49 +08:00
gaoro-xiao fbe76bbdd5 build: ignore local Keil build capture
Keep generated build_capture output out of version control so release tags point at a clean workspace state.
2026-04-27 03:43:16 +08:00
gaoro-xiao b0aa9ffc96 fix(ch390): restore recovery after emergency reset
Re-sync the CH390 MAC and force a visible link recycle so TCP links are rebuilt after reset instead of staying half-recovered.
2026-04-25 01:12:42 +08:00
gaoro-xiao 6fbe09eec9 build: update Keil build record for watchdog changes 2026-04-24 05:49:11 +08:00
gaoro-xiao be80b9dcb1 feat(iwdg): enable LED-driven watchdog refresh 2026-04-24 05:48:54 +08:00
gaoro-xiao 5e9b140db8 feat(at): add UART baud AT commands 2026-04-24 05:48:38 +08:00
gaoro-xiao edfcc0991c build: ignore local build and session temp files 2026-04-18 23:43:04 +08:00
gaoro-xiao aceacbdba1 build: sync Keil project config and build records 2026-04-18 23:21:31 +08:00
gaoro-xiao b107a3169c docs: add MUX packet-loss fix notes and regression results 2026-04-18 18:48:57 +08:00
gaoro-xiao 495fbe4298 fix(mux): fix MUX half-frame loss and silent send-path failures 2026-04-18 18:48:38 +08:00
16 changed files with 1065 additions and 188 deletions
+6
@@ -44,3 +44,9 @@ Desktop.ini
# Local packet captures
WiresharkLog/
# Local build/session artifacts
.embeddedskills/
uv4_stdout.txt
MDK-ARM/EventRecorderStub.scvd
MDK-ARM/build_capture.txt
+67 -1
@@ -143,6 +143,12 @@ UART notation:
- `U0 = USART2`
- `U1 = USART3`
### 7.4 BAUD defaults
```text
BAUD = U0,115200 / U1,115200
```
## 8. AT command definitions
### 8.1 Device-online test
@@ -239,7 +245,45 @@ OK
When the MAC is set to all zeros, the firmware uses the hardware MAC address; the MAC reported by `AT+?` is then the hardware MAC currently in effect.
-### 8.5 LINK commands
+### 8.5 BAUD commands
#### Query the UART baud rates
```text
AT+BAUD?\r\n
```
Example response:
```text
+BAUD:U0=115200,U1=115200
OK
```
#### Set a UART baud rate
```text
AT+BAUD=U0,115200\r\n
AT+BAUD=U1,38400\r\n
```
Field order:
```text
UART,BAUDRATE
```
Field descriptions:
- `UART`: `U0/U1`
- `BAUDRATE`: range `1200~921600`
Notes:
- This command only updates the running configuration record; it does not immediately reinitialize `USART2/USART3`
- Run `AT+SAVE` followed by `AT+RESET`; the saved values take effect after reboot
### 8.6 LINK commands
#### Set a single LINK record
@@ -382,6 +426,28 @@ AT+RESET\r\n
1. `AT+SAVE\r\n`
2. `AT+RESET\r\n`
### 12.3 Packet loss on the data port in MUX mode
If, under `MUX=1`, the host side has sent frames but the device peer receives noticeably fewer, check in this order:
1. Whether the firmware already contains the `2026-04-18` MUX packet-loss fix.
2. Whether each MUX frame is complete, in particular:
- `SYNC=0x7E`
- `LEN_H/LEN_L`
- `SRCID`
- `DSTMASK`
- `TAIL=0x7F`
3. Whether the host splits one frame into multiple discontinuous fragments, or inserts invalid bytes between frames.
4. Whether the TCP peer shows congestion, a shrinking window, or slow application-level reads, putting backpressure on the send path.
5. Whether RTT output shows link errors, send failures, or persistent reconnect attempts.
The fixes in the current version are:
1. The MUX parser no longer advances the UART RX ring read pointer until a complete frame has arrived, so half frames are not destructively consumed.
2. The TCP send path and UART write path no longer treat backpressure and short writes as silent success, so link-capacity problems surface early.
Field regression result: with the fixed firmware, `670` frames sent continuously in MUX mode all arrived at the receiver (`670` received, `0` lost).
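As a reference for the field list above, here is a minimal standalone sketch that builds one such frame. It mirrors the documented layout (SYNC, LEN_H, LEN_L, SRCID, DSTMASK, payload, TAIL) but is not the firmware's `uart_mux_encode_frame`:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define MUX_SYNC 0x7Eu
#define MUX_TAIL 0x7Fu

/* Build one MUX frame: SYNC, LEN_H, LEN_L, SRCID, DSTMASK, payload, TAIL.
 * Returns the total frame length (payload_len + 6), or 0 if out is too small. */
static size_t mux_build_frame(uint8_t src_id, uint8_t dst_mask,
                              const uint8_t *payload, uint16_t payload_len,
                              uint8_t *out, size_t out_size)
{
    size_t total = (size_t)payload_len + 6u;
    if (out_size < total) {
        return 0u;
    }
    out[0] = MUX_SYNC;
    out[1] = (uint8_t)(payload_len >> 8);    /* LEN_H */
    out[2] = (uint8_t)(payload_len & 0xFFu); /* LEN_L */
    out[3] = src_id;
    out[4] = dst_mask;
    memcpy(&out[5], payload, payload_len);
    out[total - 1u] = MUX_TAIL;
    return total;
}
```

Any frame whose bytes do not match this shape end to end is exactly the "incomplete frame" case flagged in step 2 above.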
## 13. Related files
- AT command implementation: [config.c](/D:/code/STM32Project/TCP2UART/App/config.c)
+44 -2
@@ -365,8 +365,8 @@ static at_result_t handle_summary_query(char *response, uint16_t max_len)
g_config.links[2].enabled, g_config.links[2].local_port, rip_str[2], g_config.links[2].remote_port, link_uart_to_str(g_config.links[2].uart),
g_config.links[3].enabled, g_config.links[3].local_port, rip_str[3], g_config.links[3].remote_port, link_uart_to_str(g_config.links[3].uart),
g_config.mux_mode,
-g_config.uart_baudrate[0],
-g_config.uart_baudrate[1]);
+(unsigned long)g_config.uart_baudrate[0],
+(unsigned long)g_config.uart_baudrate[1]);
return AT_OK;
}
@@ -386,6 +386,16 @@ static at_result_t handle_net_query(char *response, uint16_t max_len)
return AT_OK;
}
static at_result_t handle_baud_query(char *response, uint16_t max_len)
{
snprintf(response,
max_len,
"+BAUD:U0=%lu,U1=%lu\r\nOK\r\n",
(unsigned long)g_config.uart_baudrate[0],
(unsigned long)g_config.uart_baudrate[1]);
return AT_OK;
}
static at_result_t handle_link_query(uint32_t index, char *response, uint16_t max_len)
{
char rip_str[16];
@@ -556,6 +566,38 @@ at_result_t config_process_at_cmd(const char *cmd, char *response, uint16_t max_
snprintf(response, max_len, "OK\r\n");
return AT_NEED_REBOOT;
}
if (equals_ignore_case(p, "BAUD?")) {
return handle_baud_query(response, max_len);
}
if (parse_command_with_value(p, "BAUD", &value)) {
char value_copy[32];
char *cursor;
char *token;
uint8_t uart;
uint32_t baudrate;
strncpy(value_copy, value, sizeof(value_copy) - 1u);
value_copy[sizeof(value_copy) - 1u] = '\0';
cursor = value_copy;
token = config_next_token(&cursor);
if (token == NULL || parse_link_uart(token, &uart) != 0) {
snprintf(response, max_len, "ERROR: Invalid route field\r\n");
return AT_INVALID_PARAM;
}
token = config_next_token(&cursor);
if (token == NULL || parse_u32_value(token, 1200u, 921600u, &baudrate) != 0) {
snprintf(response, max_len, "ERROR: Invalid baudrate\r\n");
return AT_INVALID_PARAM;
}
if (config_next_token(&cursor) != NULL) {
snprintf(response, max_len, "ERROR: Invalid value\r\n");
return AT_INVALID_PARAM;
}
g_config.uart_baudrate[uart] = baudrate;
return handle_baud_query(response, max_len) == AT_OK ? AT_NEED_REBOOT : AT_ERROR;
}
if (equals_ignore_case(p, "NET?")) {
return handle_net_query(response, max_len);
}
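The BAUD handler above tokenizes the value as `UART,BAUDRATE` and rejects an unknown UART, an out-of-range rate, or trailing fields. A standalone sketch of the same validation rules, with the firmware's helpers (`config_next_token`, `parse_link_uart`, `parse_u32_value`) replaced by stdlib calls:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Parse "U0,115200"-style values. Returns 0 and fills uart/baud on success,
 * -1 on any malformed field. Mirrors the checks in the handler above. */
static int baud_cmd_parse(const char *value, uint8_t *uart, uint32_t *baud)
{
    char copy[32];
    char *tok;
    char *end;
    unsigned long v;

    strncpy(copy, value, sizeof(copy) - 1u);
    copy[sizeof(copy) - 1u] = '\0';

    tok = strtok(copy, ",");                 /* UART field: U0 or U1 only */
    if (tok == NULL || tok[0] != 'U' ||
        (tok[1] != '0' && tok[1] != '1') || tok[2] != '\0') {
        return -1;
    }
    *uart = (uint8_t)(tok[1] - '0');

    tok = strtok(NULL, ",");                 /* BAUDRATE: 1200..921600 */
    if (tok == NULL) {
        return -1;
    }
    v = strtoul(tok, &end, 10);
    if (*end != '\0' || v < 1200ul || v > 921600ul) {
        return -1;
    }
    *baud = (uint32_t)v;

    return (strtok(NULL, ",") == NULL) ? 0 : -1;  /* reject trailing fields */
}
```

Like the real handler, this only validates and records the values; reinitializing the USART is deferred to the SAVE/RESET cycle.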
+130 -14
@@ -17,6 +17,8 @@ typedef struct {
uint8_t rx_ring[TCP_CLIENT_RX_BUFFER_SIZE];
uint16_t rx_head;
uint16_t rx_tail;
struct pbuf *hold_pbuf;
uint16_t hold_offset;
uint32_t next_retry_ms;
uint8_t index;
tcp_client_instance_config_t config;
@@ -30,10 +32,65 @@ static uint16_t ring_free(uint16_t head, uint16_t tail, uint16_t size)
return (head >= tail) ? (uint16_t)(size - head + tail - 1u) : (uint16_t)(tail - head - 1u);
}
static uint16_t ring_used(uint16_t head, uint16_t tail, uint16_t size)
{
return (head >= tail) ? (uint16_t)(head - tail) : (uint16_t)(size - tail + head);
}
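The new `ring_used` complements the existing `ring_free`; both keep one slot permanently empty so full and empty states stay distinguishable, which means `ring_free + ring_used == size - 1` for every head/tail pair. A quick standalone check of that invariant:

```c
#include <stdint.h>

/* Same arithmetic as the ring_free/ring_used helpers above. */
static uint16_t ring_free(uint16_t head, uint16_t tail, uint16_t size)
{
    return (head >= tail) ? (uint16_t)(size - head + tail - 1u)
                          : (uint16_t)(tail - head - 1u);
}

static uint16_t ring_used(uint16_t head, uint16_t tail, uint16_t size)
{
    return (head >= tail) ? (uint16_t)(head - tail)
                          : (uint16_t)(size - tail + head);
}
```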
static void tcp_client_reset_rx_state(tcp_client_ctx_t *ctx)
{
if (ctx == NULL) {
return;
}
if (ctx->hold_pbuf != NULL) {
pbuf_free(ctx->hold_pbuf);
ctx->hold_pbuf = NULL;
}
ctx->hold_offset = 0u;
ctx->rx_head = 0u;
ctx->rx_tail = 0u;
}
static void tcp_client_fill_ring_from_pbuf(tcp_client_ctx_t *ctx)
{
struct pbuf *q;
uint16_t offset;
if (ctx == NULL || ctx->hold_pbuf == NULL) {
return;
}
q = ctx->hold_pbuf;
offset = ctx->hold_offset;
while (q != NULL && offset >= q->len) {
offset = (uint16_t)(offset - q->len);
q = q->next;
}
while (q != NULL) {
const uint8_t *src = (const uint8_t *)q->payload;
for (uint16_t i = offset; i < q->len; ++i) {
if (ring_free(ctx->rx_head, ctx->rx_tail, TCP_CLIENT_RX_BUFFER_SIZE) == 0u) {
ctx->hold_offset = (uint16_t)(ctx->hold_offset + i - offset);
return;
}
ctx->rx_ring[ctx->rx_head] = src[i];
ctx->rx_head = (uint16_t)((ctx->rx_head + 1u) % TCP_CLIENT_RX_BUFFER_SIZE);
ctx->status.rx_bytes++;
}
ctx->hold_offset = (uint16_t)(ctx->hold_offset + q->len - offset);
offset = 0u;
q = q->next;
}
pbuf_free(ctx->hold_pbuf);
ctx->hold_pbuf = NULL;
ctx->hold_offset = 0u;
}
static err_t tcp_client_on_recv(void *arg, struct tcp_pcb *pcb, struct pbuf *p, err_t err)
{
tcp_client_ctx_t *ctx = (tcp_client_ctx_t *)arg;
-struct pbuf *q;
if (ctx == NULL) {
if (p != NULL) {
@@ -59,21 +116,16 @@ static err_t tcp_client_on_recv(void *arg, struct tcp_pcb *pcb, struct pbuf *p,
return ERR_ABRT;
}
-for (q = p; q != NULL; q = q->next) {
-const uint8_t *src = (const uint8_t *)q->payload;
-for (uint16_t i = 0; i < q->len; ++i) {
-if (ring_free(ctx->rx_head, ctx->rx_tail, TCP_CLIENT_RX_BUFFER_SIZE) == 0u) {
-ctx->status.errors++;
-break;
-}
-ctx->rx_ring[ctx->rx_head] = src[i];
-ctx->rx_head = (uint16_t)((ctx->rx_head + 1u) % TCP_CLIENT_RX_BUFFER_SIZE);
-ctx->status.rx_bytes++;
-}
-}
-tcp_recved(pcb, p->tot_len);
+if (ctx->hold_pbuf != NULL) {
+ctx->status.errors++;
+return ERR_MEM;
+}
+pbuf_ref(p);
+ctx->hold_pbuf = p;
+ctx->hold_offset = 0u;
pbuf_free(p);
+tcp_client_fill_ring_from_pbuf(ctx);
return ERR_OK;
}
@@ -93,6 +145,7 @@ static void tcp_client_on_err(void *arg, err_t err)
if (ctx == NULL) {
return;
}
tcp_client_reset_rx_state(ctx);
ctx->pcb = NULL;
ctx->status.state = TCP_CLIENT_STATE_DISCONNECTED;
ctx->status.errors++;
@@ -213,6 +266,7 @@ int tcp_client_disconnect(uint8_t instance)
}
ctx = &g_clients[instance];
if (ctx->pcb != NULL) {
tcp_client_reset_rx_state(ctx);
tcp_arg(ctx->pcb, NULL);
tcp_recv(ctx->pcb, NULL);
tcp_sent(ctx->pcb, NULL);
@@ -221,8 +275,7 @@ int tcp_client_disconnect(uint8_t instance)
ctx->pcb = NULL;
}
ctx->status.state = TCP_CLIENT_STATE_DISCONNECTED;
-ctx->rx_head = 0u;
-ctx->rx_tail = 0u;
+tcp_client_reset_rx_state(ctx);
return 0;
}
@@ -239,14 +292,23 @@ int tcp_client_send(uint8_t instance, const uint8_t *data, uint16_t len)
return -1;
}
if (tcp_sndbuf(ctx->pcb) < len) {
ctx->status.errors++;
return 0;
}
err = tcp_write(ctx->pcb, data, len, TCP_WRITE_FLAG_COPY);
if (err == ERR_MEM) {
ctx->status.errors++;
return 0;
}
if (err != ERR_OK) {
ctx->status.errors++;
return -1;
}
err = tcp_output(ctx->pcb);
if (err == ERR_MEM) {
ctx->status.errors++;
return 0;
}
if (err != ERR_OK) {
ctx->status.errors++;
return -1;
@@ -263,13 +325,66 @@ int tcp_client_recv(uint8_t instance, uint8_t *data, uint16_t max_len)
return -1;
}
ctx = &g_clients[instance];
tcp_client_fill_ring_from_pbuf(ctx);
while (copied < max_len && ctx->rx_tail != ctx->rx_head) {
data[copied++] = ctx->rx_ring[ctx->rx_tail];
ctx->rx_tail = (uint16_t)((ctx->rx_tail + 1u) % TCP_CLIENT_RX_BUFFER_SIZE);
}
if (copied > 0u && ctx->pcb != NULL) {
tcp_recved(ctx->pcb, copied);
}
return (int)copied;
}
uint16_t tcp_client_rx_available(uint8_t instance)
{
if (instance >= TCP_CLIENT_INSTANCE_COUNT) {
return 0u;
}
tcp_client_fill_ring_from_pbuf(&g_clients[instance]);
return ring_used(g_clients[instance].rx_head, g_clients[instance].rx_tail, TCP_CLIENT_RX_BUFFER_SIZE);
}
uint16_t tcp_client_peek(uint8_t instance, uint8_t *data, uint16_t max_len)
{
uint16_t copied = 0u;
uint16_t tail;
tcp_client_ctx_t *ctx;
if (instance >= TCP_CLIENT_INSTANCE_COUNT || data == NULL || max_len == 0u) {
return 0u;
}
ctx = &g_clients[instance];
tcp_client_fill_ring_from_pbuf(ctx);
tail = ctx->rx_tail;
while (copied < max_len && tail != ctx->rx_head) {
data[copied++] = ctx->rx_ring[tail];
tail = (uint16_t)((tail + 1u) % TCP_CLIENT_RX_BUFFER_SIZE);
}
return copied;
}
void tcp_client_drop(uint8_t instance, uint16_t len)
{
tcp_client_ctx_t *ctx;
uint16_t dropped = 0u;
if (instance >= TCP_CLIENT_INSTANCE_COUNT || len == 0u) {
return;
}
ctx = &g_clients[instance];
while (dropped < len && ctx->rx_tail != ctx->rx_head) {
ctx->rx_tail = (uint16_t)((ctx->rx_tail + 1u) % TCP_CLIENT_RX_BUFFER_SIZE);
dropped++;
}
if (dropped > 0u && ctx->pcb != NULL) {
tcp_recved(ctx->pcb, dropped);
}
tcp_client_fill_ring_from_pbuf(ctx);
}
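The peek/drop pair above is what makes the backpressure scheme work: the router copies bytes out without consuming them, forwards only what the downstream UART TX ring actually accepted, then drops exactly that count, and only the dropped count is ever acknowledged to the peer via `tcp_recved()`. A simplified standalone model of that gated forward step (`bridge_forward` and the 128-byte cap, mirroring `APP_TCP_TO_UART_CHUNK_SIZE`, are illustrative, not firmware API):

```c
#include <stdint.h>
#include <string.h>

/* Move at most tx_free bytes (capped at a 128-byte per-poll chunk) from the
 * TCP-side buffer into the UART-side buffer. The return value is how many
 * bytes the caller should drop from its RX ring -- and therefore how far the
 * TCP receive window is allowed to reopen. */
static uint16_t bridge_forward(const uint8_t *tcp_rx, uint16_t rx_available,
                               uint8_t *uart_tx, uint16_t tx_free)
{
    uint16_t n = rx_available;
    if (n > tx_free) {
        n = tx_free;   /* gate on UART TX capacity */
    }
    if (n > 128u) {
        n = 128u;      /* per-poll chunk limit */
    }
    memcpy(uart_tx, tcp_rx, n);
    return n;
}
```

When the UART is full the function moves nothing, the TCP window stays closed, and the peer slows down instead of the bridge dropping bytes.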
bool tcp_client_is_connected(uint8_t instance)
{
return (instance < TCP_CLIENT_INSTANCE_COUNT) &&
@@ -290,6 +405,7 @@ void tcp_client_poll(void)
for (uint8_t i = 0; i < TCP_CLIENT_INSTANCE_COUNT; ++i) {
tcp_client_ctx_t *ctx = &g_clients[i];
tcp_client_fill_ring_from_pbuf(ctx);
if (!ctx->config.enabled || !ctx->config.auto_reconnect || tcp_client_is_connected(i)) {
continue;
}
+4 -1
@@ -14,7 +14,7 @@ extern "C" {
#endif
#define TCP_CLIENT_INSTANCE_COUNT 2u
-#define TCP_CLIENT_RX_BUFFER_SIZE 512u
+#define TCP_CLIENT_RX_BUFFER_SIZE 480u
#define TCP_CLIENT_RECONNECT_DELAY_MS 3000u
typedef enum {
@@ -48,6 +48,9 @@ int tcp_client_connect(uint8_t instance);
int tcp_client_disconnect(uint8_t instance);
int tcp_client_send(uint8_t instance, const uint8_t *data, uint16_t len);
int tcp_client_recv(uint8_t instance, uint8_t *data, uint16_t max_len);
uint16_t tcp_client_rx_available(uint8_t instance);
uint16_t tcp_client_peek(uint8_t instance, uint8_t *data, uint16_t max_len);
void tcp_client_drop(uint8_t instance, uint16_t len);
bool tcp_client_is_connected(uint8_t instance);
void tcp_client_get_status(uint8_t instance, tcp_client_status_t *status);
void tcp_client_poll(void);
+136 -14
@@ -18,6 +18,8 @@ typedef struct {
uint8_t rx_ring[TCP_SERVER_RX_BUFFER_SIZE];
uint16_t rx_head;
uint16_t rx_tail;
struct pbuf *hold_pbuf;
uint16_t hold_offset;
uint8_t index;
tcp_server_instance_config_t config;
tcp_server_status_t status;
@@ -30,10 +32,65 @@ static uint16_t ring_free(uint16_t head, uint16_t tail, uint16_t size)
return (head >= tail) ? (uint16_t)(size - head + tail - 1u) : (uint16_t)(tail - head - 1u);
}
static uint16_t ring_used(uint16_t head, uint16_t tail, uint16_t size)
{
return (head >= tail) ? (uint16_t)(head - tail) : (uint16_t)(size - tail + head);
}
static void tcp_server_reset_rx_state(tcp_server_ctx_t *ctx)
{
if (ctx == NULL) {
return;
}
if (ctx->hold_pbuf != NULL) {
pbuf_free(ctx->hold_pbuf);
ctx->hold_pbuf = NULL;
}
ctx->hold_offset = 0u;
ctx->rx_head = 0u;
ctx->rx_tail = 0u;
}
static void tcp_server_fill_ring_from_pbuf(tcp_server_ctx_t *ctx)
{
struct pbuf *q;
uint16_t offset;
if (ctx == NULL || ctx->hold_pbuf == NULL) {
return;
}
q = ctx->hold_pbuf;
offset = ctx->hold_offset;
while (q != NULL && offset >= q->len) {
offset = (uint16_t)(offset - q->len);
q = q->next;
}
while (q != NULL) {
const uint8_t *src = (const uint8_t *)q->payload;
for (uint16_t i = offset; i < q->len; ++i) {
if (ring_free(ctx->rx_head, ctx->rx_tail, TCP_SERVER_RX_BUFFER_SIZE) == 0u) {
ctx->hold_offset = (uint16_t)(ctx->hold_offset + i - offset);
return;
}
ctx->rx_ring[ctx->rx_head] = src[i];
ctx->rx_head = (uint16_t)((ctx->rx_head + 1u) % TCP_SERVER_RX_BUFFER_SIZE);
ctx->status.rx_bytes++;
}
ctx->hold_offset = (uint16_t)(ctx->hold_offset + q->len - offset);
offset = 0u;
q = q->next;
}
pbuf_free(ctx->hold_pbuf);
ctx->hold_pbuf = NULL;
ctx->hold_offset = 0u;
}
static err_t tcp_server_on_recv(void *arg, struct tcp_pcb *pcb, struct pbuf *p, err_t err)
{
tcp_server_ctx_t *ctx = (tcp_server_ctx_t *)arg;
-struct pbuf *q;
if (ctx == NULL) {
if (p != NULL) {
@@ -58,21 +115,16 @@ static err_t tcp_server_on_recv(void *arg, struct tcp_pcb *pcb, struct pbuf *p,
return ERR_ABRT;
}
-for (q = p; q != NULL; q = q->next) {
-const uint8_t *src = (const uint8_t *)q->payload;
-for (uint16_t i = 0; i < q->len; ++i) {
-if (ring_free(ctx->rx_head, ctx->rx_tail, TCP_SERVER_RX_BUFFER_SIZE) == 0u) {
-ctx->status.errors++;
-break;
-}
-ctx->rx_ring[ctx->rx_head] = src[i];
-ctx->rx_head = (uint16_t)((ctx->rx_head + 1u) % TCP_SERVER_RX_BUFFER_SIZE);
-ctx->status.rx_bytes++;
-}
-}
-tcp_recved(pcb, p->tot_len);
+if (ctx->hold_pbuf != NULL) {
+ctx->status.errors++;
+return ERR_MEM;
+}
+pbuf_ref(p);
+ctx->hold_pbuf = p;
+ctx->hold_offset = 0u;
pbuf_free(p);
+tcp_server_fill_ring_from_pbuf(ctx);
return ERR_OK;
}
@@ -92,6 +144,7 @@ static void tcp_server_on_err(void *arg, err_t err)
if (ctx == NULL) {
return;
}
tcp_server_reset_rx_state(ctx);
ctx->client_pcb = NULL;
ctx->status.state = ctx->config.enabled ? TCP_SERVER_STATE_LISTENING : TCP_SERVER_STATE_IDLE;
ctx->status.errors++;
@@ -193,6 +246,7 @@ int tcp_server_stop(uint8_t instance)
ctx = &g_servers[instance];
if (ctx->client_pcb != NULL) {
tcp_server_reset_rx_state(ctx);
tcp_arg(ctx->client_pcb, NULL);
tcp_recv(ctx->client_pcb, NULL);
tcp_sent(ctx->client_pcb, NULL);
@@ -210,8 +264,7 @@ int tcp_server_stop(uint8_t instance)
}
ctx->status.state = TCP_SERVER_STATE_IDLE;
-ctx->rx_head = 0u;
-ctx->rx_tail = 0u;
+tcp_server_reset_rx_state(ctx);
return 0;
}
@@ -228,15 +281,24 @@ int tcp_server_send(uint8_t instance, const uint8_t *data, uint16_t len)
return -1;
}
if (tcp_sndbuf(ctx->client_pcb) < len) {
ctx->status.errors++;
return 0;
}
err = tcp_write(ctx->client_pcb, data, len, TCP_WRITE_FLAG_COPY);
if (err == ERR_MEM) {
ctx->status.errors++;
return 0;
}
if (err != ERR_OK) {
ctx->status.errors++;
return -1;
}
err = tcp_output(ctx->client_pcb);
if (err == ERR_MEM) {
ctx->status.errors++;
return 0;
}
if (err != ERR_OK) {
ctx->status.errors++;
return -1;
@@ -253,13 +315,66 @@ int tcp_server_recv(uint8_t instance, uint8_t *data, uint16_t max_len)
return -1;
}
ctx = &g_servers[instance];
tcp_server_fill_ring_from_pbuf(ctx);
while (copied < max_len && ctx->rx_tail != ctx->rx_head) {
data[copied++] = ctx->rx_ring[ctx->rx_tail];
ctx->rx_tail = (uint16_t)((ctx->rx_tail + 1u) % TCP_SERVER_RX_BUFFER_SIZE);
}
if (copied > 0u && ctx->client_pcb != NULL) {
tcp_recved(ctx->client_pcb, copied);
}
return (int)copied;
}
uint16_t tcp_server_rx_available(uint8_t instance)
{
if (instance >= TCP_SERVER_INSTANCE_COUNT) {
return 0u;
}
tcp_server_fill_ring_from_pbuf(&g_servers[instance]);
return ring_used(g_servers[instance].rx_head, g_servers[instance].rx_tail, TCP_SERVER_RX_BUFFER_SIZE);
}
uint16_t tcp_server_peek(uint8_t instance, uint8_t *data, uint16_t max_len)
{
uint16_t copied = 0u;
uint16_t tail;
tcp_server_ctx_t *ctx;
if (instance >= TCP_SERVER_INSTANCE_COUNT || data == NULL || max_len == 0u) {
return 0u;
}
ctx = &g_servers[instance];
tcp_server_fill_ring_from_pbuf(ctx);
tail = ctx->rx_tail;
while (copied < max_len && tail != ctx->rx_head) {
data[copied++] = ctx->rx_ring[tail];
tail = (uint16_t)((tail + 1u) % TCP_SERVER_RX_BUFFER_SIZE);
}
return copied;
}
void tcp_server_drop(uint8_t instance, uint16_t len)
{
tcp_server_ctx_t *ctx;
uint16_t dropped = 0u;
if (instance >= TCP_SERVER_INSTANCE_COUNT || len == 0u) {
return;
}
ctx = &g_servers[instance];
while (dropped < len && ctx->rx_tail != ctx->rx_head) {
ctx->rx_tail = (uint16_t)((ctx->rx_tail + 1u) % TCP_SERVER_RX_BUFFER_SIZE);
dropped++;
}
if (dropped > 0u && ctx->client_pcb != NULL) {
tcp_recved(ctx->client_pcb, dropped);
}
tcp_server_fill_ring_from_pbuf(ctx);
}
bool tcp_server_is_connected(uint8_t instance)
{
return (instance < TCP_SERVER_INSTANCE_COUNT) && (g_servers[instance].client_pcb != NULL);
@@ -271,3 +386,10 @@ void tcp_server_get_status(uint8_t instance, tcp_server_status_t *status)
*status = g_servers[instance].status;
}
}
void tcp_server_poll(void)
{
for (uint8_t i = 0; i < TCP_SERVER_INSTANCE_COUNT; ++i) {
tcp_server_fill_ring_from_pbuf(&g_servers[i]);
}
}
+5 -1
@@ -14,7 +14,7 @@ extern "C" {
#endif
#define TCP_SERVER_INSTANCE_COUNT 2u
-#define TCP_SERVER_RX_BUFFER_SIZE 512u
+#define TCP_SERVER_RX_BUFFER_SIZE 480u
typedef enum {
TCP_SERVER_STATE_IDLE = 0,
@@ -42,8 +42,12 @@ int tcp_server_start(uint8_t instance);
int tcp_server_stop(uint8_t instance);
int tcp_server_send(uint8_t instance, const uint8_t *data, uint16_t len);
int tcp_server_recv(uint8_t instance, uint8_t *data, uint16_t max_len);
uint16_t tcp_server_rx_available(uint8_t instance);
uint16_t tcp_server_peek(uint8_t instance, uint8_t *data, uint16_t max_len);
void tcp_server_drop(uint8_t instance, uint16_t len);
bool tcp_server_is_connected(uint8_t instance);
void tcp_server_get_status(uint8_t instance, tcp_server_status_t *status);
void tcp_server_poll(void);
#ifdef __cplusplus
}
+95 -17
@@ -43,6 +43,52 @@ static uint16_t ring_free(uint16_t head, uint16_t tail, uint16_t size)
return (uint16_t)(size - ring_used(head, tail, size) - 1u);
}
static bool ring_peek_byte(const uart_channel_ctx_t *ctx, uint16_t offset, uint8_t *out)
{
uint16_t head;
uint16_t tail;
if (ctx == NULL || out == NULL) {
return false;
}
head = ctx->rx_head;
tail = ctx->rx_tail;
if (offset >= ring_used(head, tail, UART_RX_RING_BUFFER_SIZE)) {
return false;
}
*out = ctx->rx_ring[(tail + offset) % UART_RX_RING_BUFFER_SIZE];
return true;
}
static bool ring_peek_span(const uart_channel_ctx_t *ctx, uint16_t offset, uint8_t *data, uint16_t len)
{
if (ctx == NULL || data == NULL) {
return false;
}
for (uint16_t i = 0u; i < len; ++i) {
if (!ring_peek_byte(ctx, (uint16_t)(offset + i), &data[i])) {
return false;
}
}
return true;
}
static void ring_drop_bytes(uart_channel_ctx_t *ctx, uint16_t len)
{
if (ctx == NULL) {
return;
}
while (len > 0u && ctx->rx_tail != ctx->rx_head) {
ctx->rx_tail = (uint16_t)((ctx->rx_tail + 1u) % UART_RX_RING_BUFFER_SIZE);
--len;
}
}
static int apply_uart_config(uart_channel_t channel)
{
uart_channel_ctx_t *ctx = &g_channels[channel];
@@ -235,6 +281,14 @@ uint16_t uart_trans_write(uart_channel_t channel, const uint8_t *data, uint16_t
return written;
}
uint16_t uart_trans_tx_free(uart_channel_t channel)
{
if (channel >= UART_CHANNEL_MAX) {
return 0u;
}
return ring_free(g_channels[channel].tx_head, g_channels[channel].tx_tail, UART_TX_RING_BUFFER_SIZE);
}
void uart_trans_get_stats(uart_channel_t channel, uart_stats_t *stats)
{
if (channel < UART_CHANNEL_MAX && stats != NULL) {
@@ -297,64 +351,88 @@ void uart_trans_tx_cplt_handler(uart_channel_t channel)
bool uart_mux_try_extract_frame(uart_channel_t channel, uart_mux_frame_t *frame)
{
-uint8_t sync_byte;
+uart_channel_ctx_t *ctx;
uint8_t header[4];
+uint8_t tail_byte;
uint16_t available;
uint16_t payload_len;
+uint16_t sync_offset;
+uint16_t total_len;
if (channel >= UART_CHANNEL_MAX || frame == NULL) {
return false;
}
+ctx = &g_channels[channel];
+for (;;) {
available = uart_trans_rx_available(channel);
if (available < 6u) {
return false;
}
-/* Scan for SYNC byte (0x7E) — discard non-matching bytes one at a time */
-if (uart_trans_read(channel, &sync_byte, 1u) != 1u) {
-return false;
-}
-if (sync_byte != UART_MUX_SYNC) {
-return false;
-}
-/* Need at least: 2(len) + 1(src) + 1(dst) + payload + 1(tail) = 5 + payload */
-available = uart_trans_rx_available(channel);
-if (available < 4u) {
-return false;
-}
-if (uart_trans_read(channel, header, sizeof(header)) != sizeof(header)) {
-return false;
-}
+sync_offset = available;
+for (uint16_t i = 0u; i < available; ++i) {
+uint8_t byte = 0u;
+if (!ring_peek_byte(ctx, i, &byte)) {
+return false;
+}
+if (byte == UART_MUX_SYNC) {
+sync_offset = i;
+break;
+}
+}
+if (sync_offset == available) {
+ring_drop_bytes(ctx, available);
+return false;
+}
+if (sync_offset > 0u) {
+ring_drop_bytes(ctx, sync_offset);
+available = (uint16_t)(available - sync_offset);
+}
+if (available < 6u) {
+return false;
+}
+if (!ring_peek_span(ctx, 1u, header, sizeof(header))) {
+return false;
+}
payload_len = (uint16_t)(((uint16_t)header[0] << 8) | header[1]);
if (payload_len > sizeof(frame->payload)) {
-return false;
+ring_drop_bytes(ctx, 1u);
+continue;
}
-if (uart_trans_rx_available(channel) < (uint16_t)(payload_len + 1u)) {
-return false;
-}
+total_len = (uint16_t)(payload_len + 6u);
+if (available < total_len) {
+return false;
+}
+if (!ring_peek_byte(ctx, (uint16_t)(total_len - 1u), &tail_byte)) {
+return false;
+}
+if (tail_byte != UART_MUX_TAIL) {
+ring_drop_bytes(ctx, 1u);
+continue;
+}
frame->src_id = header[2];
frame->dst_mask = header[3];
frame->payload_len = payload_len;
if (payload_len > 0u) {
-if (uart_trans_read(channel, frame->payload, payload_len) != payload_len) {
+if (!ring_peek_span(ctx, 5u, frame->payload, payload_len)) {
return false;
}
}
-{
-uint8_t tail = 0u;
-if (uart_trans_read(channel, &tail, 1u) != 1u || tail != UART_MUX_TAIL) {
-return false;
-}
-}
+ring_drop_bytes(ctx, total_len);
return true;
}
+}
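The rewritten extractor never consumes ring bytes until a whole frame (SYNC through TAIL) is present, and resyncs one byte at a time on a bad length or tail. A standalone model of those resync rules on a flat buffer (the function name `mux_find_frame` is hypothetical, not firmware API):

```c
#include <stdint.h>
#include <stddef.h>

/* Scan buf for a complete MUX frame. Returns the offset where the frame
 * starts and sets *frame_len, or returns -1 when more bytes are needed
 * (nothing would be consumed in that case, mirroring the extractor above). */
static int mux_find_frame(const uint8_t *buf, size_t len, size_t *frame_len)
{
    size_t i = 0u;
    while (i < len) {
        size_t avail = len - i;
        uint16_t payload_len;
        size_t total;
        if (buf[i] != 0x7Eu) {   /* not SYNC: skip one byte and resync */
            i++;
            continue;
        }
        if (avail < 6u) {
            return -1;           /* header incomplete: wait for more bytes */
        }
        payload_len = (uint16_t)(((uint16_t)buf[i + 1u] << 8) | buf[i + 2u]);
        total = (size_t)payload_len + 6u;
        if (avail < total) {
            return -1;           /* frame incomplete: wait, do not consume */
        }
        if (buf[i + total - 1u] != 0x7Fu) {
            i++;                 /* bad TAIL: treat as false sync, resync */
            continue;
        }
        *frame_len = total;
        return (int)i;
    }
    return -1;
}
```

The key property, and the fix for the original half-frame bug, is that an incomplete frame returns -1 without advancing past its SYNC byte.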
bool uart_mux_encode_frame(uint8_t src_id, bool uart_mux_encode_frame(uint8_t src_id,
uint8_t dst_mask, uint8_t dst_mask,
+1
@@ -55,6 +55,7 @@ void uart_trans_poll(void);
uint16_t uart_trans_rx_available(uart_channel_t channel);
uint16_t uart_trans_read(uart_channel_t channel, uint8_t *data, uint16_t max_len);
uint16_t uart_trans_write(uart_channel_t channel, const uint8_t *data, uint16_t len);
uint16_t uart_trans_tx_free(uart_channel_t channel);
void uart_trans_get_stats(uart_channel_t channel, uart_stats_t *stats);
void uart_trans_reset_stats(uart_channel_t channel);
void uart_trans_idle_handler(uart_channel_t channel);
+1 -1
@@ -38,7 +38,7 @@ void MX_IWDG_Init(void)
/* USER CODE END IWDG_Init 1 */
hiwdg.Instance = IWDG;
-hiwdg.Init.Prescaler = IWDG_PRESCALER_4;
+hiwdg.Init.Prescaler = IWDG_PRESCALER_64;
hiwdg.Init.Reload = 4095;
if (HAL_IWDG_Init(&hiwdg) != HAL_OK)
{
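Moving the prescaler from 4 to 64 stretches the IWDG timeout from roughly 0.41 s to roughly 6.55 s (assuming the nominal 40 kHz LSI of STM32F1-class parts; the real LSI frequency varies from chip to chip), which gives the 1 Hz LED-tick refresh added in main.c comfortable margin. The arithmetic:

```c
#include <stdint.h>

/* IWDG timeout in ms = (Reload + 1) * Prescaler / f_LSI.
 * lsi_hz is an assumption: 40000 nominal on STM32F1, but untrimmed. */
static uint32_t iwdg_timeout_ms(uint32_t reload, uint32_t prescaler, uint32_t lsi_hz)
{
    return (uint32_t)(((uint64_t)(reload + 1u) * prescaler * 1000u) / lsi_hz);
}
```

Refreshing every 1000 ms against a ~6.5 s window tolerates several missed ticks before the watchdog resets the board.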
+232 -44
@@ -38,6 +38,7 @@
#define LED_PIN GPIO_PIN_13
#define LED_PORT GPIOC
#define APP_ROUTE_BUFFER_SIZE 256u
#define APP_TCP_TO_UART_CHUNK_SIZE 128u
#define STACK_GUARD_WORD 0xA5A5A5A5u
#define APP_HEALTH_CHECK_INTERVAL_MS 5000u
/* USER CODE END PD */
@@ -66,7 +67,11 @@ static void App_RouteMuxUartTraffic(void);
static void App_RouteTcpTraffic(void);
static void StackGuard_Init(void);
static void StackGuard_Check(void);
-static void App_SendToUart(uint8_t uart_index, uint8_t src_id, uint8_t dst_mask, const uint8_t *data, uint16_t len);
+static bool App_SendToUart(uint8_t uart_index, uint8_t src_id, uint8_t dst_mask, const uint8_t *data, uint16_t len);
static uint16_t App_SendTcpPayloadToUartRaw(uint8_t uart_index, const uint8_t *data, uint16_t len);
static bool App_SendTcpPayloadToUartMux(uint8_t uart_index, uint8_t src_id, uint8_t dst_mask, const uint8_t *data, uint16_t len);
static bool App_SendTcpServerPayload(uint8_t instance, const uint8_t *data, uint16_t len);
static bool App_SendTcpClientPayload(uint8_t instance, const uint8_t *data, uint16_t len);
/* USER CODE END PFP */
/* Private user code ---------------------------------------------------------*/
@@ -115,6 +120,9 @@ void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
if (g_led_blink_ticks >= 1000u) {
g_led_blink_ticks = 0u;
LED_Toggle();
if (hiwdg.Instance == IWDG) {
HAL_IWDG_Refresh(&hiwdg);
}
}
}
}
@@ -261,48 +269,190 @@ static void App_Init(void)
}
}
-static void App_SendToUart(uint8_t uart_index, uint8_t src_id, uint8_t dst_mask, const uint8_t *data, uint16_t len)
static bool App_SendTcpServerPayload(uint8_t instance, const uint8_t *data, uint16_t len)
{
return tcp_server_send(instance, data, len) == (int)len;
}
static bool App_SendTcpClientPayload(uint8_t instance, const uint8_t *data, uint16_t len)
{
return tcp_client_send(instance, data, len) == (int)len;
}
static bool App_SendToUart(uint8_t uart_index, uint8_t src_id, uint8_t dst_mask, const uint8_t *data, uint16_t len)
{ {
const device_config_t *cfg = config_get(); const device_config_t *cfg = config_get();
uart_channel_t channel = (uart_index == LINK_UART_U1) ? UART_CHANNEL_U1 : UART_CHANNEL_U0; uart_channel_t channel = (uart_index == LINK_UART_U1) ? UART_CHANNEL_U1 : UART_CHANNEL_U0;
uint16_t written;
if (cfg->mux_mode == MUX_MODE_FRAME) { if (cfg->mux_mode == MUX_MODE_FRAME) {
uint8_t frame[APP_ROUTE_BUFFER_SIZE + 6u]; uint8_t frame[APP_ROUTE_BUFFER_SIZE + 6u];
uint16_t frame_len = 0u; uint16_t frame_len = 0u;
if (uart_mux_encode_frame(src_id, dst_mask, data, len, frame, &frame_len, sizeof(frame))) { if (uart_mux_encode_frame(src_id, dst_mask, data, len, frame, &frame_len, sizeof(frame))) {
(void)uart_trans_write(channel, frame, frame_len); written = uart_trans_write(channel, frame, frame_len);
return written == frame_len;
} }
return false;
} else { } else {
(void)uart_trans_write(channel, data, len); written = uart_trans_write(channel, data, len);
return written == len;
} }
} }
static uint16_t App_SendTcpPayloadToUartRaw(uint8_t uart_index, const uint8_t *data, uint16_t len)
{
uart_channel_t channel = (uart_index == LINK_UART_U1) ? UART_CHANNEL_U1 : UART_CHANNEL_U0;
return uart_trans_write(channel, data, len);
}
static bool App_SendTcpPayloadToUartMux(uint8_t uart_index, uint8_t src_id, uint8_t dst_mask, const uint8_t *data, uint16_t len)
{
uart_channel_t channel = (uart_index == LINK_UART_U1) ? UART_CHANNEL_U1 : UART_CHANNEL_U0;
uint8_t frame[APP_TCP_TO_UART_CHUNK_SIZE + 6u];
uint16_t frame_len = 0u;
if (len == 0u || len > APP_TCP_TO_UART_CHUNK_SIZE) {
return false;
}
if (uart_trans_tx_free(channel) < (uint16_t)(len + 6u)) {
return false;
}
if (!uart_mux_encode_frame(src_id, dst_mask, data, len, frame, &frame_len, sizeof(frame))) {
return false;
}
return uart_trans_write(channel, frame, frame_len) == frame_len;
}
static void App_RouteTcpTraffic(void) static void App_RouteTcpTraffic(void)
{ {
const device_config_t *cfg = config_get(); const device_config_t *cfg = config_get();
uint8_t buffer[APP_ROUTE_BUFFER_SIZE]; uint8_t buffer[APP_TCP_TO_UART_CHUNK_SIZE];
for (uint8_t i = 0; i < TCP_SERVER_INSTANCE_COUNT; ++i) { for (uint8_t i = 0; i < TCP_SERVER_INSTANCE_COUNT; ++i) {
int rc = tcp_server_recv(i, buffer, sizeof(buffer)); uint16_t available = tcp_server_rx_available(i);
if (rc > 0) { if (available > 0u) {
uint8_t link_index = (i == 0u) ? CONFIG_LINK_S1 : CONFIG_LINK_S2; uint8_t link_index = (i == 0u) ? CONFIG_LINK_S1 : CONFIG_LINK_S2;
App_SendToUart(cfg->links[link_index].uart, uint8_t uart_index = cfg->links[link_index].uart;
config_link_index_to_endpoint(link_index), uint8_t src_id = config_link_index_to_endpoint(link_index);
config_uart_index_to_endpoint(cfg->links[link_index].uart), uint8_t dst_mask = config_uart_index_to_endpoint(uart_index);
buffer, uart_channel_t channel = (uart_index == LINK_UART_U1) ? UART_CHANNEL_U1 : UART_CHANNEL_U0;
(uint16_t)rc);
if (cfg->mux_mode == MUX_MODE_FRAME) {
uint16_t tx_free = uart_trans_tx_free(channel);
uint16_t payload_len;
if (tx_free <= 6u) {
return;
}
payload_len = available;
if (payload_len > APP_TCP_TO_UART_CHUNK_SIZE) {
payload_len = APP_TCP_TO_UART_CHUNK_SIZE;
}
if (payload_len > (uint16_t)(tx_free - 6u)) {
payload_len = (uint16_t)(tx_free - 6u);
}
if (payload_len == 0u) {
return;
}
payload_len = tcp_server_peek(i, buffer, payload_len);
if (payload_len == 0u) {
continue;
}
if (!App_SendTcpPayloadToUartMux(uart_index, src_id, dst_mask, buffer, payload_len)) {
return;
}
tcp_server_drop(i, payload_len);
} else {
uint16_t chunk = available;
uint16_t tx_free = uart_trans_tx_free(channel);
uint16_t written;
if (tx_free == 0u) {
return;
}
if (chunk > APP_TCP_TO_UART_CHUNK_SIZE) {
chunk = APP_TCP_TO_UART_CHUNK_SIZE;
}
if (chunk > tx_free) {
chunk = tx_free;
}
if (chunk == 0u) {
return;
}
chunk = tcp_server_peek(i, buffer, chunk);
if (chunk == 0u) {
continue;
}
written = App_SendTcpPayloadToUartRaw(uart_index, buffer, chunk);
if (written > 0u) {
tcp_server_drop(i, written);
}
if (written < chunk) {
return;
}
}
} }
} }
for (uint8_t i = 0; i < TCP_CLIENT_INSTANCE_COUNT; ++i) { for (uint8_t i = 0; i < TCP_CLIENT_INSTANCE_COUNT; ++i) {
int rc = tcp_client_recv(i, buffer, sizeof(buffer)); uint16_t available = tcp_client_rx_available(i);
if (rc > 0) { if (available > 0u) {
uint8_t link_index = (i == 0u) ? CONFIG_LINK_C1 : CONFIG_LINK_C2; uint8_t link_index = (i == 0u) ? CONFIG_LINK_C1 : CONFIG_LINK_C2;
App_SendToUart(cfg->links[link_index].uart, uint8_t uart_index = cfg->links[link_index].uart;
config_link_index_to_endpoint(link_index), uint8_t src_id = config_link_index_to_endpoint(link_index);
config_uart_index_to_endpoint(cfg->links[link_index].uart), uint8_t dst_mask = config_uart_index_to_endpoint(uart_index);
buffer, uart_channel_t channel = (uart_index == LINK_UART_U1) ? UART_CHANNEL_U1 : UART_CHANNEL_U0;
(uint16_t)rc);
if (cfg->mux_mode == MUX_MODE_FRAME) {
uint16_t tx_free = uart_trans_tx_free(channel);
uint16_t payload_len;
if (tx_free <= 6u) {
return;
}
payload_len = available;
if (payload_len > APP_TCP_TO_UART_CHUNK_SIZE) {
payload_len = APP_TCP_TO_UART_CHUNK_SIZE;
}
if (payload_len > (uint16_t)(tx_free - 6u)) {
payload_len = (uint16_t)(tx_free - 6u);
}
if (payload_len == 0u) {
return;
}
payload_len = tcp_client_peek(i, buffer, payload_len);
if (payload_len == 0u) {
continue;
}
if (!App_SendTcpPayloadToUartMux(uart_index, src_id, dst_mask, buffer, payload_len)) {
return;
}
tcp_client_drop(i, payload_len);
} else {
uint16_t chunk = available;
uint16_t tx_free = uart_trans_tx_free(channel);
uint16_t written;
if (tx_free == 0u) {
return;
}
if (chunk > APP_TCP_TO_UART_CHUNK_SIZE) {
chunk = APP_TCP_TO_UART_CHUNK_SIZE;
}
if (chunk > tx_free) {
chunk = tx_free;
}
if (chunk == 0u) {
return;
}
chunk = tcp_client_peek(i, buffer, chunk);
if (chunk == 0u) {
continue;
}
written = App_SendTcpPayloadToUartRaw(uart_index, buffer, chunk);
if (written > 0u) {
tcp_client_drop(i, written);
}
if (written < chunk) {
return;
}
}
} }
} }
} }
@@ -315,37 +465,57 @@ static void App_RouteRawUartTraffic(void)
     len = uart_trans_read(UART_CHANNEL_U0, buffer, sizeof(buffer));
     if (len > 0u) {
+        bool routed_ok = true;
         for (uint8_t i = 0; i < CONFIG_LINK_COUNT; ++i) {
+            bool sent = true;
             if (cfg->links[i].enabled == 0u || cfg->links[i].uart != LINK_UART_U0) {
                 continue;
             }
             if (i == CONFIG_LINK_S1) {
-                (void)tcp_server_send(0u, buffer, len);
+                sent = App_SendTcpServerPayload(0u, buffer, len);
             } else if (i == CONFIG_LINK_S2) {
-                (void)tcp_server_send(1u, buffer, len);
+                sent = App_SendTcpServerPayload(1u, buffer, len);
             } else if (i == CONFIG_LINK_C1) {
-                (void)tcp_client_send(0u, buffer, len);
+                sent = App_SendTcpClientPayload(0u, buffer, len);
            } else if (i == CONFIG_LINK_C2) {
-                (void)tcp_client_send(1u, buffer, len);
+                sent = App_SendTcpClientPayload(1u, buffer, len);
             }
+            if (!sent) {
+                routed_ok = false;
+            }
+        }
+        if (!routed_ok) {
+            return;
         }
     }

     len = uart_trans_read(UART_CHANNEL_U1, buffer, sizeof(buffer));
     if (len > 0u) {
+        bool routed_ok = true;
         for (uint8_t i = 0; i < CONFIG_LINK_COUNT; ++i) {
+            bool sent = true;
             if (cfg->links[i].enabled == 0u || cfg->links[i].uart != LINK_UART_U1) {
                 continue;
             }
             if (i == CONFIG_LINK_S1) {
-                (void)tcp_server_send(0u, buffer, len);
+                sent = App_SendTcpServerPayload(0u, buffer, len);
             } else if (i == CONFIG_LINK_S2) {
-                (void)tcp_server_send(1u, buffer, len);
+                sent = App_SendTcpServerPayload(1u, buffer, len);
             } else if (i == CONFIG_LINK_C1) {
-                (void)tcp_client_send(0u, buffer, len);
+                sent = App_SendTcpClientPayload(0u, buffer, len);
             } else if (i == CONFIG_LINK_C2) {
-                (void)tcp_client_send(1u, buffer, len);
+                sent = App_SendTcpClientPayload(1u, buffer, len);
             }
+            if (!sent) {
+                routed_ok = false;
+            }
+        }
+        if (!routed_ok) {
+            return;
         }
     }
 }
@@ -354,6 +524,7 @@ static void App_RouteMuxUartTraffic(void)
 {
     uart_mux_frame_t frame;
     const device_config_t *cfg = config_get();
+    bool routed_ok;

     while (uart_mux_try_extract_frame(UART_CHANNEL_U0, &frame)) {
 #if defined(DEBUG) && (DEBUG != 0)
@@ -366,33 +537,42 @@
             uint16_t response_len = (uint16_t)strlen(response_text);
             uint16_t frame_len = 0u;
             if (uart_mux_encode_frame(config_uart_index_to_endpoint(LINK_UART_U0), 0u, (const uint8_t *)response_text, response_len, g_mux_response_frame, &frame_len, sizeof(g_mux_response_frame))) {
-                (void)uart_trans_write(UART_CHANNEL_U0, g_mux_response_frame, frame_len);
+                if (uart_trans_write(UART_CHANNEL_U0, g_mux_response_frame, frame_len) != frame_len) {
+                    return;
+                }
             }
             if (result == AT_NEED_REBOOT) {
                 static const char hint[] = "Note: Use AT+SAVE then AT+RESET to apply changes\r\n";
                 response_len = (uint16_t)strlen(hint);
                 if (uart_mux_encode_frame(config_uart_index_to_endpoint(LINK_UART_U0), 0u, (const uint8_t *)hint, response_len, g_mux_response_frame, &frame_len, sizeof(g_mux_response_frame))) {
-                    (void)uart_trans_write(UART_CHANNEL_U0, g_mux_response_frame, frame_len);
+                    if (uart_trans_write(UART_CHANNEL_U0, g_mux_response_frame, frame_len) != frame_len) {
+                        return;
+                    }
                 }
             }
         }
         continue;
     }

+        routed_ok = true;
         if ((frame.dst_mask & ENDPOINT_S1) != 0u) {
-            (void)tcp_server_send(0u, frame.payload, frame.payload_len);
+            routed_ok = App_SendTcpServerPayload(0u, frame.payload, frame.payload_len) && routed_ok;
         }
         if ((frame.dst_mask & ENDPOINT_S2) != 0u) {
-            (void)tcp_server_send(1u, frame.payload, frame.payload_len);
+            routed_ok = App_SendTcpServerPayload(1u, frame.payload, frame.payload_len) && routed_ok;
         }
         if ((frame.dst_mask & ENDPOINT_C1) != 0u) {
-            (void)tcp_client_send(0u, frame.payload, frame.payload_len);
+            routed_ok = App_SendTcpClientPayload(0u, frame.payload, frame.payload_len) && routed_ok;
         }
         if ((frame.dst_mask & ENDPOINT_C2) != 0u) {
-            (void)tcp_client_send(1u, frame.payload, frame.payload_len);
+            routed_ok = App_SendTcpClientPayload(1u, frame.payload, frame.payload_len) && routed_ok;
         }
         if ((frame.dst_mask & ENDPOINT_UART3) != 0u && cfg->links[CONFIG_LINK_S2].uart == LINK_UART_U1) {
-            App_SendToUart(LINK_UART_U1, frame.src_id, ENDPOINT_UART3, frame.payload, frame.payload_len);
+            routed_ok = App_SendToUart(LINK_UART_U1, frame.src_id, ENDPOINT_UART3, frame.payload, frame.payload_len) && routed_ok;
+        }
+        if (!routed_ok) {
+            return;
         }
     }
@@ -407,33 +587,42 @@
             uint16_t response_len = (uint16_t)strlen(response_text);
             uint16_t frame_len = 0u;
             if (uart_mux_encode_frame(config_uart_index_to_endpoint(LINK_UART_U1), 0u, (const uint8_t *)response_text, response_len, g_mux_response_frame, &frame_len, sizeof(g_mux_response_frame))) {
-                (void)uart_trans_write(UART_CHANNEL_U1, g_mux_response_frame, frame_len);
+                if (uart_trans_write(UART_CHANNEL_U1, g_mux_response_frame, frame_len) != frame_len) {
+                    return;
+                }
             }
             if (result == AT_NEED_REBOOT) {
                 static const char hint[] = "Note: Use AT+SAVE then AT+RESET to apply changes\r\n";
                 response_len = (uint16_t)strlen(hint);
                 if (uart_mux_encode_frame(config_uart_index_to_endpoint(LINK_UART_U1), 0u, (const uint8_t *)hint, response_len, g_mux_response_frame, &frame_len, sizeof(g_mux_response_frame))) {
-                    (void)uart_trans_write(UART_CHANNEL_U1, g_mux_response_frame, frame_len);
+                    if (uart_trans_write(UART_CHANNEL_U1, g_mux_response_frame, frame_len) != frame_len) {
+                        return;
+                    }
                 }
             }
         }
         continue;
     }

+        routed_ok = true;
         if ((frame.dst_mask & ENDPOINT_S1) != 0u) {
-            (void)tcp_server_send(0u, frame.payload, frame.payload_len);
+            routed_ok = App_SendTcpServerPayload(0u, frame.payload, frame.payload_len) && routed_ok;
         }
         if ((frame.dst_mask & ENDPOINT_S2) != 0u) {
-            (void)tcp_server_send(1u, frame.payload, frame.payload_len);
+            routed_ok = App_SendTcpServerPayload(1u, frame.payload, frame.payload_len) && routed_ok;
         }
         if ((frame.dst_mask & ENDPOINT_C1) != 0u) {
-            (void)tcp_client_send(0u, frame.payload, frame.payload_len);
+            routed_ok = App_SendTcpClientPayload(0u, frame.payload, frame.payload_len) && routed_ok;
         }
         if ((frame.dst_mask & ENDPOINT_C2) != 0u) {
-            (void)tcp_client_send(1u, frame.payload, frame.payload_len);
+            routed_ok = App_SendTcpClientPayload(1u, frame.payload, frame.payload_len) && routed_ok;
         }
         if ((frame.dst_mask & ENDPOINT_UART2) != 0u) {
-            App_SendToUart(LINK_UART_U0, frame.src_id, ENDPOINT_UART2, frame.payload, frame.payload_len);
+            routed_ok = App_SendToUart(LINK_UART_U0, frame.src_id, ENDPOINT_UART2, frame.payload, frame.payload_len) && routed_ok;
+        }
+        if (!routed_ok) {
+            return;
         }
     }
 }
@@ -448,6 +637,7 @@ static void App_Poll(void)
     sys_check_timeouts();
     App_StopLinksIfNeeded();
     App_StartLinksIfNeeded();
+    tcp_server_poll();
     tcp_client_poll();
     uart_trans_poll();
     StackGuard_Check();
@@ -471,9 +661,6 @@
         NVIC_SystemReset();
     }
-    if (hiwdg.Instance == IWDG) {
-        HAL_IWDG_Refresh(&hiwdg);
-    }
 }

 /* USER CODE END 0 */
@@ -488,6 +675,7 @@ int main(void)
     MX_USART3_UART_Init();
     MX_SPI1_Init();
     MX_TIM4_Init();
+    MX_IWDG_Init();
     LED_Init();
     LED_StartBlink();
+112 -37
@@ -40,12 +40,42 @@ static uint8_t g_ch390_ready;
 static ch390_diag_t g_diag;
 static uint8_t g_tx_consecutive_timeout;
 static uint8_t g_chip_reset_count;
+static uint8_t g_link_restart_pending;

-#define TX_TIMEOUT_THRESHOLD 3u
-#define CHIP_RESET_MAX 3u
+#define TX_BUSY_WAIT_TIMEOUT_MS 10u
+#define TX_TIMEOUT_RESET_THRESHOLD 6u
+#define HEALTH_FAIL_THRESHOLD 3u
+#define RESTART_PENDING_FLAG 0x01u
+#define HEALTH_FAIL_SHIFT 4u
+#define HEALTH_FAIL_MASK 0xF0u
+
+static bool ch390_mac_address_valid(const uint8_t *mac);
+
+static uint8_t ch390_runtime_is_restart_pending(void)
+{
+    return (uint8_t)(g_link_restart_pending & RESTART_PENDING_FLAG);
+}
+
+static void ch390_runtime_set_restart_pending(void)
+{
+    g_link_restart_pending = (uint8_t)(g_link_restart_pending | RESTART_PENDING_FLAG);
+}
+
+static void ch390_runtime_clear_restart_pending(void)
+{
+    g_link_restart_pending = (uint8_t)(g_link_restart_pending & (uint8_t)(~RESTART_PENDING_FLAG));
+}
+
+static uint8_t ch390_runtime_get_health_fail_count(void)
+{
+    return (uint8_t)((g_link_restart_pending & HEALTH_FAIL_MASK) >> HEALTH_FAIL_SHIFT);
+}
+
+static void ch390_runtime_set_health_fail_count(uint8_t count)
+{
+    g_link_restart_pending = (uint8_t)((g_link_restart_pending & (uint8_t)(~HEALTH_FAIL_MASK)) |
+                                       (uint8_t)((count << HEALTH_FAIL_SHIFT) & HEALTH_FAIL_MASK));
+}

 static uint8_t ch390_runtime_probe_identity(void)
 {
@@ -76,6 +106,38 @@ static uint8_t ch390_runtime_probe_identity(void)
     return g_diag.id_valid;
 }

+static void ch390_runtime_prepare_netif(struct netif *netif)
+{
+    struct ethernetif *ethernetif;
+
+    if (netif == NULL) {
+        return;
+    }
+    netif->hwaddr_len = ETHARP_HWADDR_LEN;
+    netif->mtu = 1500;
+    netif->flags = NETIF_FLAG_BROADCAST | NETIF_FLAG_ETHARP | NETIF_FLAG_ETHERNET;
+
+    ethernetif = (struct ethernetif *)netif->state;
+    if (ethernetif != NULL) {
+        ethernetif->rx_len = 0u;
+        ethernetif->rx_status = 0u;
+    }
+}
+
+static void ch390_runtime_sync_mac(struct netif *netif)
+{
+    if (netif == NULL) {
+        return;
+    }
+    if (ch390_mac_address_valid(netif->hwaddr)) {
+        ch390_set_mac_address(netif->hwaddr);
+    }
+    ch390_get_mac(netif->hwaddr);
+}
+
 static void ch390_runtime_refresh_diag(void)
 {
     uint8_t id_valid = ch390_runtime_probe_identity();
@@ -165,7 +227,7 @@ struct pbuf *ch390_runtime_input_frame(struct netif *netif)
     return p;
 }

-bool ch390_mac_address_valid(const uint8_t *mac)
+static bool ch390_mac_address_valid(const uint8_t *mac)
 {
     if (mac == NULL) {
         return false;
@@ -180,8 +242,6 @@ bool ch390_mac_address_valid(const uint8_t *mac)
 void ch390_runtime_init(struct netif *netif, const uint8_t *mac)
 {
-    struct ethernetif *ethernetif = (struct ethernetif *)netif->state;
-
     SEGGER_RTT_WriteString(0, "ETH init: gpio\r\n");
     ch390_gpio_init();
     SEGGER_RTT_WriteString(0, "ETH init: spi\r\n");
@@ -192,13 +252,7 @@
     SEGGER_RTT_WriteString(0, "ETH init: probe\r\n");
     g_ch390_ready = ch390_runtime_probe_identity();
     if (g_ch390_ready == 0u) {
-        netif->hwaddr_len = ETHARP_HWADDR_LEN;
-        netif->mtu = 1500;
-        netif->flags = NETIF_FLAG_BROADCAST | NETIF_FLAG_ETHARP | NETIF_FLAG_ETHERNET;
-        ethernetif->rx_len = 0u;
-        ethernetif->rx_status = 0u;
+        ch390_runtime_prepare_netif(netif);
         netif_set_link_down(netif);
         SEGGER_RTT_WriteString(0, "ETH init: invalid chip id\r\n");
         return;
@@ -221,14 +275,9 @@
         }
     }

-    netif->hwaddr_len = ETHARP_HWADDR_LEN;
     SEGGER_RTT_WriteString(0, "ETH init: getmac\r\n");
+    ch390_runtime_prepare_netif(netif);
     ch390_get_mac(netif->hwaddr);
-    netif->mtu = 1500;
-    netif->flags = NETIF_FLAG_BROADCAST | NETIF_FLAG_ETHARP | NETIF_FLAG_ETHERNET;
-    ethernetif->rx_len = 0u;
-    ethernetif->rx_status = 0u;

     ch390_runtime_refresh_diag();
     g_ch390_ready = g_diag.id_valid;
@@ -306,6 +355,13 @@ void ch390_runtime_check_link(struct netif *netif)
         return;
     }

+    if (ch390_runtime_is_restart_pending() != 0u) {
+        netif_set_link_down(netif);
+        ch390_runtime_clear_restart_pending();
+        SEGGER_RTT_WriteString(0, "ETH restart pending: hold link down for app recycle\r\n");
+        return;
+    }
+
     ch390_runtime_refresh_diag();
     link_up = (uint8_t)ch390_get_link_status();
@@ -333,8 +389,6 @@ err_t ch390_runtime_output(struct netif *netif, struct pbuf *p)
     struct pbuf *q;
     uint32_t start_tick;

-    LWIP_UNUSED_ARG(netif);
-
     if (!g_ch390_ready) {
         LINK_STATS_INC(link.drop);
         return ERR_IF;
@@ -346,15 +400,17 @@
     start_tick = HAL_GetTick();
     while (ch390_read_reg(CH390_TCR) & TCR_TXREQ) {
-        if ((HAL_GetTick() - start_tick) > 10u) {
+        if ((HAL_GetTick() - start_tick) > TX_BUSY_WAIT_TIMEOUT_MS) {
 #if ETH_PAD_SIZE
             pbuf_add_header(p, ETH_PAD_SIZE);
 #endif
             LINK_STATS_INC(link.drop);
             g_diag.tx_packets_timeout++;
+            if (g_tx_consecutive_timeout < 0xFFu) {
                 g_tx_consecutive_timeout++;
-            if (g_tx_consecutive_timeout >= TX_TIMEOUT_THRESHOLD) {
-                ch390_runtime_emergency_reset();
+            }
+            if (g_tx_consecutive_timeout >= TX_TIMEOUT_RESET_THRESHOLD) {
+                (void)ch390_runtime_emergency_reset(netif);
             }
             return ERR_TIMEOUT;
         }
@@ -392,23 +448,26 @@
     return g_ch390_ready != 0u;
 }

-bool ch390_runtime_emergency_reset(void)
+bool ch390_runtime_emergency_reset(struct netif *netif)
 {
-    SEGGER_RTT_printf(0, "ETH emergency reset (tx_timeout=%u resets=%u/%u)\r\n",
-                      g_tx_consecutive_timeout, g_chip_reset_count, CHIP_RESET_MAX);
-    if (g_chip_reset_count >= CHIP_RESET_MAX) {
-        SEGGER_RTT_WriteString(0, "ETH: max resets reached, giving up\r\n");
-        g_ch390_ready = 0u;
-        return false;
+    SEGGER_RTT_printf(0, "ETH emergency reset (tx_timeout=%u resets=%u)\r\n",
+                      g_tx_consecutive_timeout, g_chip_reset_count);
+    if (netif != NULL) {
+        netif_set_link_down(netif);
     }
+    if (g_chip_reset_count < 0xFFu) {
         g_chip_reset_count++;
+    }
     g_tx_consecutive_timeout = 0u;

     ch390_software_reset();
     ch390_delay_us(5000u);
     ch390_default_config();
+    ch390_runtime_prepare_netif(netif);
+    ch390_runtime_sync_mac(netif);
+    g_ch390_irq_pending = 0u;

     ch390_runtime_refresh_diag();
     g_ch390_ready = g_diag.id_valid;
@@ -418,24 +477,40 @@
         return false;
     }

+    ch390_runtime_set_health_fail_count(0u);
+    ch390_runtime_set_restart_pending();
     SEGGER_RTT_WriteString(0, "ETH emergency reset: OK\r\n");
     return true;
 }

 void ch390_runtime_health_check(struct netif *netif)
 {
+    uint16_t vid;
+    uint8_t fail_count;
+
     if (!g_ch390_ready) {
+        SEGGER_RTT_WriteString(0, "ETH health: chip not ready, attempting reset\r\n");
+        (void)ch390_runtime_emergency_reset(netif);
         return;
     }

     /* Verify chip is still responding by reading vendor ID */
-    uint16_t vid = ch390_get_vendor_id();
+    vid = ch390_get_vendor_id();
     if (vid == 0x0000u || vid == 0xFFFFu) {
-        SEGGER_RTT_printf(0, "ETH health: invalid VID=0x%04X, attempting reset\r\n", vid);
-        netif_set_link_down(netif);
-        if (ch390_runtime_emergency_reset()) {
-            ch390_runtime_check_link(netif);
+        fail_count = ch390_runtime_get_health_fail_count();
+        if (fail_count < 0x0Fu) {
+            fail_count++;
         }
+        ch390_runtime_set_health_fail_count(fail_count);
+        if (fail_count >= HEALTH_FAIL_THRESHOLD) {
+            SEGGER_RTT_printf(0, "ETH health: invalid VID=0x%04X streak=%u, attempting reset\r\n",
+                              vid,
+                              fail_count);
+            ch390_runtime_set_health_fail_count(0u);
+            (void)ch390_runtime_emergency_reset(netif);
+        }
+    } else {
+        ch390_runtime_set_health_fail_count(0u);
     }
 }
+1 -1
@@ -58,7 +58,7 @@ void ch390_runtime_check_link(struct netif *netif);
 err_t ch390_runtime_output(struct netif *netif, struct pbuf *p);
 void ch390_runtime_get_diag(ch390_diag_t *diag);
 bool ch390_runtime_is_ready(void);
-bool ch390_runtime_emergency_reset(void);
+bool ch390_runtime_emergency_reset(struct netif *netif);
 void ch390_runtime_health_check(struct netif *netif);
 uint8_t ch390_runtime_get_reset_count(void);
+10 -10
@@ -1,8 +1,8 @@
 Code (inc. data) RO Data RW Data ZI Data Debug Object Name
 632 0 0 0 0 0 ch390.o
 616 0 64 0 0 0 ch390_interface.o
-1858 0 85 5 88 0 ch390_runtime.o
+2050 0 85 6 88 0 ch390_runtime.o
-3690 0 591 8 1240 0 config.o
+3958 0 591 8 1240 0 config.o
 8 0 0 0 0 0 def.o
 124 0 0 0 0 0 dma.o
 1772 0 0 1 240 0 etharp.o
@@ -16,8 +16,8 @@
 0 0 0 0 24 0 ip.o
 778 0 0 2 0 0 ip4.o
 46 0 4 0 0 0 ip4_addr.o
-0 0 0 0 12 0 iwdg.o
+44 0 0 0 12 0 iwdg.o
-2694 0 185 12 272 0 main.o
+3212 0 185 12 272 0 main.o
 828 0 0 12 4115 0 mem.o
 196 0 244 32 6464 0 memp.o
 582 0 0 12 0 0 netif.o
@@ -33,7 +33,7 @@
 392 0 0 0 32 0 stm32f1xx_hal_flash.o
 240 0 0 0 0 0 stm32f1xx_hal_flash_ex.o
 516 0 0 0 0 0 stm32f1xx_hal_gpio.o
-12 0 0 0 0 0 stm32f1xx_hal_iwdg.o
+106 0 0 0 0 0 stm32f1xx_hal_iwdg.o
 60 0 0 0 0 0 stm32f1xx_hal_msp.o
 1240 0 18 0 0 0 stm32f1xx_hal_rcc.o
 1510 0 0 0 0 0 stm32f1xx_hal_spi.o
@@ -43,13 +43,13 @@
 490 0 0 0 0 0 stm32f1xx_it.o
 2 0 24 4 0 0 system_stm32f1xx.o
 3474 0 193 32 0 0 tcp.o
-1212 0 0 0 1120 0 tcp_client.o
+1556 0 0 0 1072 0 tcp_client.o
 3684 0 0 36 20 0 tcp_in.o
 3862 0 0 0 0 0 tcp_out.o
-966 0 0 0 1104 0 tcp_server.o
+1346 0 0 0 1048 0 tcp_server.o
 164 0 0 0 72 0 tim.o
 374 0 16 12 0 0 timeouts.o
-1296 0 0 0 2936 0 uart_trans.o
+1590 0 0 0 2936 0 uart_trans.o
 816 0 0 0 624 0 usart.o

 Object Totals
@@ -57,8 +57,8 @@ Memory Map of the image
 Load Region LR_IROM1
-Execution Region ER_IROM1 (Exec base: 0x08000000, Size: 0x0000D328, Max: 0x00010000, END)
+Execution Region ER_IROM1 (Exec base: 0x08000000, Size: 0x0000DB7C, Max: 0x00010000, END)
-Execution Region RW_IRAM1 (Exec base: 0x20000000, Size: 0x00005000, Max: 0x00005000, END)
+Execution Region RW_IRAM1 (Exec base: 0x20000000, Size: 0x00004F98, Max: 0x00005000, END)

 Image component sizes
+123 -9
@@ -225,11 +225,21 @@
 5. `AT+MUX?`
 6. `AT+NET=...`
 7. `AT+NET?`
-8. `AT+LINK=...`
-9. `AT+LINK?`
-10. `AT+SAVE`
-11. `AT+RESET`
-12. `AT+DEFAULT`
+8. `AT+BAUD=...`
+9. `AT+BAUD?`
+10. `AT+LINK=...`
+11. `AT+LINK?`
+12. `AT+SAVE`
+13. `AT+RESET`
+14. `AT+DEFAULT`
+
+The fixed mappings for the data UARTs are:
+
+1. `U0 = USART2`
+2. `U1 = USART3`
+3. `AT+BAUD=U0,<baud>` / `AT+BAUD=U1,<baud>` only updates the running configuration record
+4. A new baud rate does not re-initialize `USART2/USART3` immediately; it only takes effect from the saved value after `AT+SAVE` + `AT+RESET`
+5. The baud range currently accepted by the code is `1200 ~ 921600`
 ### 6.2 Key field rules
@@ -238,6 +248,7 @@
 1. During current field verification, configuration commands must be terminated with a newline to complete the frame.
 2. If the host sends them the wrong way, the symptom looks exactly like "the config port does not respond at all".
 3. So when the config port is unresponsive, the first priority is not to change the parser but to verify the host-side send format and wiring.
+4. If a `BAUD` query already shows the new value but the live `USART2/USART3` baud rate has not changed, do not immediately conclude the command is broken; first confirm whether `AT+SAVE` and `AT+RESET` have been executed.
 ### 6.3 Minimal verification steps
@@ -247,13 +258,16 @@
 2. First send `AT`
 3. Then send `AT+QUERY`
 4. Then send `AT+NET?`
-5. Then send `AT+LINK?`
-6. Change one minimal parameter, for example:
+5. Then send `AT+BAUD?`
+6. Then send `AT+LINK?`
+7. Change one minimal parameter, for example:
    - `AT+MUX=1`
-7. Execute:
+   - `AT+BAUD=U1,38400`
+8. Execute:
    - `AT+SAVE`
    - `AT+RESET`
-8. Query again after the reset to confirm the configuration was retained
+9. Query again after the reset to confirm the configuration was retained
+10. If this round verified `AT+BAUD`, also reconnect to `USART2/USART3` from the host at the new baud rate to confirm the data ports actually changed
 ### 6.4 How to debug persistence failures
@@ -448,6 +462,106 @@ After MUX mode starts, the Ethernet port drops off the network after a while; replugging the cable does not recover it.
 Keil MDK-ARM build: 0 Error(s), 0 Warning(s). Flash 52.7 KB / 64.0 KB (82.5%), RAM 20.0 KB / 20.0 KB (100%).
### 9.5 2026-04-18 MUX-mode packet-loss fix record

#### Symptom

During sustained transmit testing in `MUX=1` mode, the host sent `500` packets but only `360` were received — clear packet loss.

#### Root cause

This round of debugging confirmed at least two direct loss points on the software side:

1. `uart_mux_try_extract_frame()` in `App/uart_trans.c` consumed the `SYNC` and header before confirming the whole frame was present. If a MUX frame arrived across multiple poll cycles, the half frame was moved out of the RX ring too early, desynchronizing the current frame and dropping it outright.
2. The send paths in `App/tcp_server.c`, `App/tcp_client.c`, and `Core/Src/main.c` handled backpressure and short writes incompletely:
   - `tcp_sndbuf() < len`
   - `tcp_write()` / `tcp_output()` returning `ERR_MEM`
   - `uart_trans_write()` writing only part of the bytes

In the old code the upper layers silently ignored all of these cases, so a send function would return while the data had not actually fully entered the downstream link.
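The peek-then-consume rule behind loss point 1 can be sketched with a minimal ring buffer. The frame layout here (one `SYNC` byte `0x55` plus a one-byte length) and all names are simplified illustrations, not the firmware's real `uart_mux` format; the point is only that the tail never advances until the entire frame is available.

```c
#include <stdint.h>
#include <stdbool.h>

#define RING_SIZE 64u   /* power of two so index math can use a mask */

typedef struct {
    uint8_t buf[RING_SIZE];
    uint16_t head;      /* producer index */
    uint16_t tail;      /* consumer index */
} ring_t;

static uint16_t ring_used(const ring_t *r)
{
    return (uint16_t)((r->head - r->tail) & (RING_SIZE - 1u));
}

static uint8_t ring_peek_at(const ring_t *r, uint16_t offset)
{
    return r->buf[(uint16_t)((r->tail + offset) & (RING_SIZE - 1u))];
}

static void ring_push(ring_t *r, const uint8_t *data, uint16_t len)
{
    for (uint16_t i = 0; i < len; ++i) {
        r->buf[r->head] = data[i];
        r->head = (uint16_t)((r->head + 1u) & (RING_SIZE - 1u));
    }
}

/* Returns true and consumes exactly one frame; on false the ring is untouched,
 * so a frame split across poll cycles survives until its second half arrives. */
static bool try_extract_frame(ring_t *r, uint8_t *payload, uint8_t *payload_len)
{
    /* Resync: drop bytes until a SYNC byte sits at the tail. */
    while (ring_used(r) > 0u && ring_peek_at(r, 0u) != 0x55u) {
        r->tail = (uint16_t)((r->tail + 1u) & (RING_SIZE - 1u));
    }
    if (ring_used(r) < 2u) {
        return false;                          /* SYNC + length not complete */
    }
    uint8_t len = ring_peek_at(r, 1u);
    if (ring_used(r) < (uint16_t)(2u + len)) {
        return false;                          /* payload still in flight */
    }
    for (uint8_t i = 0; i < len; ++i) {
        payload[i] = ring_peek_at(r, (uint16_t)(2u + i));
    }
    *payload_len = len;
    r->tail = (uint16_t)((r->tail + 2u + len) & (RING_SIZE - 1u));  /* consume now */
    return true;
}
```

The destructive variant of the old code consumed `SYNC` and the header before the length check, which is exactly what turned a cross-poll half frame into a dropped frame.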
#### Fix

| File | Change | Notes |
|------|--------|-------|
| `App/uart_trans.c` | Changed `uart_mux_try_extract_frame()` to peek first, consume later | `rx_tail` only advances once `SYNC + header + payload + tail` are all available, so a half frame is never destructively consumed |
| `App/tcp_server.c` | `tcp_server_send()` returns `0` and counts an error on `tcp_sndbuf()<len` or `ERR_MEM` | Explicitly signals that the send was not accepted downstream instead of faking success |
| `App/tcp_client.c` | `tcp_client_send()` handles backpressure and `ERR_MEM` the same way | Keeps the logic consistent with the server side |
| `Core/Src/main.c` | `App_SendToUart()` checks that `uart_trans_write()` wrote everything | Fails explicitly and immediately when the TX ring lacks space |
| `Core/Src/main.c` | `App_RouteTcpTraffic()` / `App_RouteRawUartTraffic()` / `App_RouteMuxUartTraffic()` uniformly check send results | Backpressure, short writes, and incomplete submissions are no longer silently treated as success |
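The no-silent-short-write rule from this fix can be illustrated with a small all-or-nothing send helper. This is a self-contained sketch: `tcp_sndbuf_stub()` and the globals stand in for lwIP's `tcp_sndbuf()`/`tcp_write()`/`tcp_output()` calls, which are not reproduced here.

```c
#include <stdint.h>

static uint16_t g_sndbuf_free;   /* simulated tcp_sndbuf() result          */
static uint16_t g_accepted;      /* bytes the simulated stack has accepted */

static uint16_t tcp_sndbuf_stub(void)
{
    return g_sndbuf_free;
}

/* Returns len on full acceptance, 0 on backpressure - never a fake success.
 * The caller keeps the data buffered and retries on a later poll. */
static uint16_t tcp_send_all_or_nothing(const uint8_t *data, uint16_t len)
{
    (void)data;
    if (tcp_sndbuf_stub() < len) {
        return 0u;                          /* send buffer too small right now */
    }
    g_accepted = len;                       /* would be tcp_write()+tcp_output() */
    g_sndbuf_free = (uint16_t)(g_sndbuf_free - len);
    return len;
}
```

Returning `0` (rather than silently dropping the tail of the payload) is what lets the routing loops stop forwarding and leave unsent bytes in place for the next poll.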
#### Verification

1. The Keil MDK-ARM build passes with `0 Error(s), 0 Warning(s)`.
2. Repeating the sustained MUX transmit test on the new firmware: the host sent `670` packets and received `670`, with `0` loss.
3. The fix adds no new resident queues or buffers, leaving the current RAM budget unchanged.
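The later low-RAM backpressure work (visible in the `App_RouteTcpTraffic()` diff above) extends the same idea: each forwarded TCP chunk is clamped to the smallest of the buffered bytes, the fixed chunk size, and the UART TX free space minus the 6-byte MUX framing overhead. A sketch of just that clamp, with constants mirroring the diff and a function name of our own choosing:

```c
#include <stdint.h>

#define CHUNK_SIZE   128u   /* APP_TCP_TO_UART_CHUNK_SIZE in the diff */
#define MUX_OVERHEAD 6u     /* bytes added by uart_mux_encode_frame() */

/* Clamp the payload so the encoded frame is guaranteed to fit the TX ring. */
static uint16_t clamp_payload(uint16_t available, uint16_t tx_free)
{
    uint16_t payload_len;

    if (tx_free <= MUX_OVERHEAD) {
        return 0u;                               /* no room even for framing */
    }
    payload_len = available;
    if (payload_len > CHUNK_SIZE) {
        payload_len = CHUNK_SIZE;                /* bounded stack buffer */
    }
    if (payload_len > (uint16_t)(tx_free - MUX_OVERHEAD)) {
        payload_len = (uint16_t)(tx_free - MUX_OVERHEAD);
    }
    return payload_len;
}
```

A result of `0` means "forward nothing this poll"; the data stays in the TCP RX buffer, which is what gates TCP forwarding by UART TX capacity instead of dropping.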
### 9.6 2026-04-24 CH390 emergency-reset recovery-semantics completion record

#### Symptom

After the CH390 hit a TX timeout and `ch390_runtime_emergency_reset()` fired, register access recovered, the `VID` was readable, and the PHY link could even stay `up`, yet TCP traffic could stay dead for a long time: the chip looked alive, but the network behaved as if disconnected, and usually only a reboot recovered it.

While converging the follow-up implementation we also confirmed that resetting immediately on a single bad VID read or a single TX busy episode was too aggressive — it mistook transient jitter for a dead chip — so the current code has evolved into a thresholded recovery strategy.

#### Root cause

The old `ch390_runtime_emergency_reset()` only ran `ch390_software_reset()`, `ch390_default_config()`, and a `diag` refresh; it lacked two recovery layers the cold-init path already had:

1. **MAC alignment was not restored**: the old code neither rewrote the CH390 `PAR` nor re-synced the hardware MAC back into `netif->hwaddr`. If the CH390's MAC filtering state disagreed with lwIP's cached identity after a soft reset, the symptom was exactly "registers accessible, link up, but unicast traffic dead".
2. **Upper-layer link recycling was never triggered**: the TX-timeout path called `ch390_runtime_emergency_reset()` directly without guaranteeing that `App_StopLinksIfNeeded()` / `App_StartLinksIfNeeded()` observed a valid link-down cycle, so stale TCP client/server state could survive across the chip reset and the application layer never finished rebuilding.
3. **No jitter suppression in the recovery policy**: resetting on a single TX busy or a single bad VID over-recovers under transient bus/link jitter and amplifies the disruption, so the current implementation adds consecutive-failure thresholds and failure-count clearing.

#### Fix

| File | Change | Notes |
|------|--------|-------|
| `Drivers/CH390/ch390_runtime.h` | `ch390_runtime_emergency_reset()` now takes a `struct netif *` | Lets the reset path repair both the CH390 and the lwIP-visible state |
| `Drivers/CH390/ch390_runtime.c` | Extracted `ch390_runtime_prepare_netif()` | Uniformly restores `hwaddr_len`, `mtu`, `flags`, and RX software state after init / emergency reset |
| `Drivers/CH390/ch390_runtime.c` | Added `ch390_runtime_sync_mac()` | After an emergency reset, rewrites the CH390 `PAR` from the current `netif->hwaddr` and re-syncs the hardware MAC back to lwIP |
| `Drivers/CH390/ch390_runtime.c` | On successful emergency reset, clears `g_ch390_irq_pending` and sets `g_link_restart_pending` | Prevents stale pre-reset interrupt state from disturbing recovery |
| `Drivers/CH390/ch390_runtime.c` | Added one-shot hold-down logic to `ch390_runtime_check_link()` | Guarantees the main loop sees at least one link-down, triggering the app-layer stop/start recycle |
| `Drivers/CH390/ch390_runtime.c` | Both the TX-timeout and health-check reset paths now pass `netif` | Both recovery paths share the same MAC-resync and link-rebuild semantics |
| `Drivers/CH390/ch390_runtime.c` | Added consecutive-failure thresholds for TX timeout and health check | Reduces the risk of over-resetting on transient jitter |
#### Current semantics (the source code is authoritative)
1. **TX-timeout threshold**
   - Single-event condition: the `CH390_TCR.TXREQ` busy-wait exceeds `10 ms`.
   - Consecutive threshold: `ch390_runtime_emergency_reset()` fires only after `6` events in a row.
   - Any successful transmit clears `g_tx_consecutive_timeout` immediately, so the threshold counts **consecutive** failures, not accumulated historical failures.
2. **Health-check threshold**
   - A single `VID` read of `0x0000` / `0xFFFF` does not trigger a reset.
   - Only `3` consecutive abnormal VID reads trigger an emergency reset.
   - If `g_ch390_ready == 0`, the health check attempts a reset directly, without waiting for the VID streak.
3. **One-shot restart-pending semantics**
   - A successful emergency reset sets restart-pending.
   - The next `ch390_runtime_check_link()` first forces one `netif_set_link_down()`, then immediately clears the flag and returns early.
   - This guarantees the main loop observes at least one effective logical link-down, so the existing `App_StopLinksIfNeeded()` / `App_StartLinksIfNeeded()` path recycles and rebuilds the TCP links.
4. **Internal counters and state**
   - `g_chip_reset_count`: counts emergency-reset attempts, saturating at `0xFF`.
   - `g_tx_consecutive_timeout`: counts consecutive TX-busy timeouts; cleared on a successful send or on entering the reset path.
   - The consecutive health-check failure count currently shares the high nibble of one state byte with `g_link_restart_pending`; it is cleared when the VID reads normal again, when the reset threshold is reached, or when an emergency reset succeeds.
5. **Failure-path difference**
   - restart-pending is set, and the subsequent app recycle semantics engage, only if `g_diag.id_valid` is still valid after the emergency reset completes.
   - If the chip stays unresponsive after the reset, the failure is merely recorded and the function returns; it does not pretend the chip is recoverable.
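The thresholds and the one-shot hold-down above can be modeled compactly in C. The constants match the documented values (`6` consecutive TX timeouts, `3` consecutive bad VID reads); the state layout is simplified (separate counters rather than the shared nibble), and the reset itself is assumed to succeed:

```c
#include <stdint.h>

#define TX_TIMEOUT_RESET_THRESHOLD 6
#define VID_BAD_RESET_THRESHOLD    3

static uint8_t g_tx_consecutive_timeout;
static uint8_t g_vid_consecutive_bad;
static uint8_t g_link_restart_pending;

/* Feed one TX attempt result; returns 1 when a reset should fire. */
static int tx_attempt(int timed_out)
{
    if (!timed_out) { g_tx_consecutive_timeout = 0; return 0; }
    if (++g_tx_consecutive_timeout < TX_TIMEOUT_RESET_THRESHOLD) return 0;
    g_tx_consecutive_timeout = 0;   /* cleared on entering the reset path */
    g_link_restart_pending = 1;     /* model: assume the reset succeeds */
    return 1;
}

/* Feed one health-check VID read; returns 1 when a reset should fire. */
static int health_check(uint16_t vid)
{
    if (vid != 0x0000 && vid != 0xFFFF) { g_vid_consecutive_bad = 0; return 0; }
    if (++g_vid_consecutive_bad < VID_BAD_RESET_THRESHOLD) return 0;
    g_vid_consecutive_bad = 0;
    g_link_restart_pending = 1;
    return 1;
}

/* One-shot hold-down: after a reset, the next link poll reports one forced
 * link-down (so the app recycles its TCP links), then the flag clears. */
static int check_link(int phy_up)
{
    if (g_link_restart_pending) { g_link_restart_pending = 0; return 0; }
    return phy_up;
}
```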
#### Expected results
1. After a CH390 emergency reset, the hardware MAC, `netif->hwaddr`, and the current application identity are realigned.
2. Even if the physical cable stays connected throughout, the main loop still observes one effective link-down on a later poll and recycles/rebuilds the TCP links via the existing `App_StopLinksIfNeeded()` / `App_StartLinksIfNeeded()` path.
3. The recovery policy is more conservative about transient faults: a reset fires only when consecutive timeouts or consecutive VID anomalies reach their thresholds, lowering the chance of spurious recovery.
4. Post-reset semantics are much closer to cold init, so the system no longer gets stuck half-recovered in "registers fine, traffic dead".
#### Build verification
1. The project build was run manually on site and passed.
2. This round of changes covers `Drivers/CH390/ch390_runtime.c`, `Drivers/CH390/ch390_runtime.h`, and this manual entry; the external interfaces of the TCP client/server modules are untouched.
---
## 10. Common Pitfalls
@@ -184,6 +184,68 @@ EN,LPORT,RIP,RPORT,UART
2. Uniformly driven by the `LINK[idx]` configuration
3. The scheduling layer decides the data-exchange paths between instances and UARTs
### 6.4 `v1.1.0` low-RAM TCP backpressure fix
As of `v1.1.0`, the `TCP -> UART` path adds the following implementation constraints to fix "local buffers get overrun when TCP receives faster than the UART can send", while adding as little static RAM as possible:
1. Keep reusing the existing `RX ring` of `tcp_server` / `tcp_client`; no per-connection large pending-payload buffer is added.
2. `tcp_server_on_recv()` / `tcp_client_on_recv()` no longer call `tcp_recved()` inside the callback.
3. In the callback, the `pbuf` handed over by lwIP is taken over by the application via `pbuf_ref()` and the callback context's original reference is released; the main loop then keeps pumping the data into the `RX ring` and frees the pbuf once it is fully consumed.
4. When the `RX ring` temporarily cannot take it all, the remainder stays in `hold_pbuf + hold_offset` and the main loop continues moving it on the next pass.
5. `tcp_recved()` is called to release the TCP receive window only once data is actually `drop`ped from the `TCP RX ring`, i.e. has been accepted downstream by the `UART` into its send path.
The effect:
1. When the `UART` is slow, the TCP window no longer grows unconditionally.
2. The peer's send rate is naturally throttled by the lwIP receive window.
3. The fix builds on the existing rings and main-loop scheduling; it introduces neither `FreeRTOS` nor new large static buffers.
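A minimal model of the deferred-ack pump, with lwIP's `pbuf` replaced by a flat buffer (`hold_buf` standing in for `hold_pbuf`; names and types are simplifications, not the project's real API): the receive window is reopened (`acked`) only by what actually fits downstream, and the hold state carries the remainder across main-loop passes.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for the per-connection hold state (the real code holds a pbuf
 * taken over with pbuf_ref(); here a flat buffer plays that role). */
typedef struct {
    const uint8_t *hold_buf;    /* hold_pbuf stand-in */
    size_t hold_len;
    size_t hold_offset;         /* how far the main loop has pumped already */
    size_t acked;               /* bytes for which tcp_recved() has been called */
} conn_t;

/* Main-loop pump: move as much held data as fits into the RX-ring stand-in,
 * and "tcp_recved" only what was actually consumed downstream. */
static void pump(conn_t *c, uint8_t *ring, size_t ring_free)
{
    size_t left = c->hold_len - c->hold_offset;
    size_t n = left < ring_free ? left : ring_free;
    memcpy(ring, c->hold_buf + c->hold_offset, n);
    c->hold_offset += n;
    c->acked += n;              /* window reopens only by real consumption */
    if (c->hold_offset == c->hold_len)
        c->hold_buf = NULL;     /* fully drained: the real code pbuf_free()s here */
}
```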
#### RAW vs. MUX split
In `v1.1.0`, the `TCP` side always carries pure payload, so the `TCP` backpressure logic is shared between `RAW` and `MUX` modes up to the `UART commit` point:
1. `RAW` mode:
   - The main loop first checks `uart_trans_tx_free()`.
   - It then `peek`s `min(tcp_available, tx_free, APP_TCP_TO_UART_CHUNK_SIZE)` bytes from the TCP ring.
   - However many bytes `uart_trans_write()` actually accepts is how many get `drop + tcp_recved`.
2. `MUX` mode:
   - The `TCP` payload itself carries no frame header or tail.
   - Only when `UART TX free >= payload_len + 6` is a frame encoded temporarily on the stack and written to the `UART TX ring` in one shot.
   - Only after the whole frame is queued does `drop + tcp_recved` run for the original payload length.
This design guarantees:
1. `RAW` mode allows streaming, incremental commits.
2. `MUX` mode preserves the "a single UART output frame must be queued whole" semantics.
3. The `TCP` receive window always tracks real downstream consumption, not "already memcpy'd locally inside the callback" as the commit point.
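The two commit rules can be sketched together. The ring type, the `0xAA 0x55 … 0x16` frame encoding, and the helper names are simplified assumptions for illustration; only the `payload + 6` overhead and the 128-byte chunk limit come from the text. RAW commits whatever fits; MUX commits all or nothing.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define APP_TCP_TO_UART_CHUNK_SIZE 128u
#define MUX_FRAME_OVERHEAD 6u

/* Stand-in UART TX ring exposing only free-space and write. */
typedef struct { uint8_t buf[64]; size_t used; } uart_tx_t;
static size_t uart_tx_free(const uart_tx_t *u) { return sizeof u->buf - u->used; }
static void uart_tx_write(uart_tx_t *u, const uint8_t *d, size_t n)
{ memcpy(u->buf + u->used, d, n); u->used += n; }

/* RAW mode: commit up to min(avail, tx_free, chunk); the caller then
 * drops + tcp_recved exactly the returned byte count. */
static size_t raw_commit(uart_tx_t *u, const uint8_t *tcp_data, size_t avail)
{
    size_t n = avail;
    if (n > uart_tx_free(u)) n = uart_tx_free(u);
    if (n > APP_TCP_TO_UART_CHUNK_SIZE) n = APP_TCP_TO_UART_CHUNK_SIZE;
    uart_tx_write(u, tcp_data, n);
    return n;
}

/* MUX mode: all-or-nothing. The frame (payload + 6 bytes of overhead,
 * layout assumed for illustration) is encoded on the stack and queued
 * whole; len must already be bounded by the chunk size. */
static size_t mux_commit(uart_tx_t *u, uint8_t chan, const uint8_t *payload, size_t len)
{
    if (uart_tx_free(u) < len + MUX_FRAME_OVERHEAD)
        return 0;               /* ack nothing: retry on a later pass */
    uint8_t frame[APP_TCP_TO_UART_CHUNK_SIZE + MUX_FRAME_OVERHEAD];
    frame[0] = 0xAA; frame[1] = 0x55; frame[2] = chan;
    frame[3] = (uint8_t)len; frame[4] = (uint8_t)(len >> 8);
    memcpy(frame + 5, payload, len);
    frame[5 + len] = 0x16;
    uart_tx_write(u, frame, len + MUX_FRAME_OVERHEAD);
    return len;                 /* ack the original payload length */
}
```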
#### RAM and chunk strategy
To make room for the new `hold_pbuf / hold_offset` state fields and further reduce per-pass forwarding pressure, `v1.1.0` also adopts:
1. New `APP_TCP_TO_UART_CHUNK_SIZE = 128`
2. `TCP_SERVER_RX_BUFFER_SIZE` reduced from `512` to `480`
3. `TCP_CLIENT_RX_BUFFER_SIZE` reduced from `512` to `480`
Design intent:
1. Smaller per-pass chunks improve main-loop scheduling granularity.
2. In `MUX` mode, `payload + 6` fits whole into the `UART TX ring` more easily.
3. With static RAM near its ceiling, this reclaims space for the few new state fields.
#### Build baseline
`v1.1.0` uses the `TCP2UART` target of `MDK-ARM/TCP2UART.uvprojx` as the build-acceptance baseline.
Reference results from the latest passing build:
1. `errors = 0`
2. `warnings = 0`
3. `flash_bytes = 56544`
4. `ram_bytes = 20376`
This shows the fixed project still fits within the `20KB RAM` limit of the `STM32F103R8T6`, but the margin is now slim; future features should prefer reusing existing buffers and state over adding new large static arrays.
## 7. Main-loop implementation direction
The main loop keeps its bare-metal polling style: