Platformatic Python-Node v2.0.0: Unleashing Real-Time Streaming and WebSocket Capabilities

Platformatic Python-Node v2.0.0 introduces comprehensive support for HTTP response streaming and bidirectional WebSocket communication. This significant release empowers full-stack teams to develop real-time, high-performance applications by seamlessly connecting Python's asynchronous ecosystem with Node.js.
For teams utilizing Python ASGI applications alongside Node.js services, this update unlocks a new generation of application types. These include real-time dashboards, live data feeds, WebSocket-powered chat systems, progressive file uploads, and server-sent events, all while maintaining the robust Python-Node.js integration expected from Platformatic.
For those new to @platformatic/python-node, it is a module that facilitates running Python ASGI applications (such as FastAPI, Starlette, or Django) directly within Node.js processes. This eliminates the need for a separate Python server, mitigates HTTP proxy overhead, and simplifies deployment setups.
What's New in v2.0.0
This release delivers four major enhancements that align @platformatic/python-node with modern ASGI server capabilities:
HTTP Response Streaming
The new handleStream() method enables efficient streaming of HTTP responses. Instead of buffering the entire response body before returning it to Node.js, data chunks are processed incrementally as they arrive from your Python application. This approach drastically reduces memory usage for large responses and provides immediate access to response headers even before the body transmission completes.
Each chunk is pulled from Python only when Node.js is ready to process it. Python, in turn, waits until a chunk is requested before continuing its ASGI handler processing. This architecture establishes proper backpressure between the two languages, ensuring full asynchronous operation on both ends, allowing either language to perform other tasks while awaiting the other.
const res = await python.handleStream(req);
// Headers available immediately
console.log(res.status); // 200
console.log(res.headers.get('content-type'));
// Body consumed via AsyncIterator as chunks arrive
for await (const chunk of res) {
  console.log(chunk.toString());
}
HTTP Request Streaming
In addition to streaming responses, you can now stream request bodies to Python. This capability is essential for handling large file uploads, processing data progressively, or implementing custom streaming protocols.
Each write operation returns a promise, providing backpressure from Python to prevent Node.js from writing excessive data if Python is not consuming it quickly enough. An internal buffer is used; if sufficient space is available, the promise resolves immediately; otherwise, it waits for space to become available.
const req = new Request({
  method: 'POST',
  url: '/upload',
  headers: { 'Content-Type': 'application/octet-stream' }
});
// Dispatch the request and write the body concurrently
const [res] = await Promise.all([
  python.handleStream(req),
  (async () => {
    // Stream chunks to Python (chunk1..chunk3 are placeholders for your data)
    await req.write(chunk1);
    await req.write(chunk2);
    await req.write(chunk3);
    await req.end();
  })()
]);
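This bounded-buffer behaviour can be modelled in a few lines of Python with asyncio.Queue. The sketch below is purely illustrative (the module's real buffer lives in Rust, not asyncio): a slow reader forces the writer's put() calls to suspend, just as a slow Python consumer makes req.write() promises wait.

```python
import asyncio

# Model of a bounded internal buffer: writes resolve immediately while
# space remains; a full buffer makes the writer wait for the reader.
async def main():
    buf = asyncio.Queue(maxsize=2)
    order = []

    async def producer():
        for i in range(4):
            await buf.put(i)           # suspends when the buffer is full
            order.append(f'wrote {i}')
        await buf.put(None)            # sentinel marking end of stream

    async def consumer():
        while (item := await buf.get()) is not None:
            await asyncio.sleep(0.01)  # a slow reader exerts backpressure
            order.append(f'read {item}')

    await asyncio.gather(producer(), consumer())
    return order

order = asyncio.run(main())
print(order)
```

The first two writes complete immediately (the buffer has room); every later write has to wait for the consumer, which is the backpressure described above.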
Bidirectional WebSocket Support
Full WebSocket support means your Python ASGI applications can now manage persistent, bidirectional connections. Whether you are developing a chat application, a live dashboard, or a multiplayer game, you can implement the core WebSocket logic in Python while integrating seamlessly with your Node.js infrastructure.
const req = new Request({ url: '/ws', websocket: true });
const res = await python.handleStream(req);
// Send messages to Python
await req.write('Hello from Node.js!');
// Receive messages from Python
for await (const chunk of res) {
  console.log('Received:', chunk.toString());
}
ASGI 3.0 Protocol Implementation
Under the hood, v2.0.0 fully implements the ASGI 3.0 protocol specification for both HTTP and WebSocket communication. This ensures broad compatibility with the entire Python asynchronous ecosystem, including FastAPI's StreamingResponse, Starlette's WebSocket endpoints, and any other ASGI-compliant framework.
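For readers unfamiliar with the protocol, an ASGI 3.0 app is just an async callable taking scope, receive, and send. The framework-free sketch below (stdlib only, driven by stub callables rather than a real server) shows the http.response.body events and the more_body flag that make incremental streaming possible:

```python
import asyncio

# A minimal ASGI 3.0 HTTP app that streams its body in several chunks;
# more_body=True signals that another chunk follows.
async def app(scope, receive, send):
    assert scope['type'] == 'http'
    await send({
        'type': 'http.response.start',
        'status': 200,
        'headers': [(b'content-type', b'text/plain')],
    })
    for word in (b'hello ', b'streaming ', b'world'):
        await send({'type': 'http.response.body', 'body': word, 'more_body': True})
    # A final empty body event with more_body=False ends the response
    await send({'type': 'http.response.body', 'body': b'', 'more_body': False})

# Drive the app with stub receive/send callables to inspect the event flow
async def main():
    events = []

    async def receive():
        return {'type': 'http.request', 'body': b'', 'more_body': False}

    async def send(event):
        events.append(event)

    await app({'type': 'http', 'method': 'GET', 'path': '/'}, receive, send)
    return events

events = asyncio.run(main())
body = b''.join(e.get('body', b'') for e in events if e['type'] == 'http.response.body')
print(body.decode())  # hello streaming world
```

Any framework that emits this event sequence (FastAPI, Starlette, Django Channels, or a hand-written app like the one above) works with handleStream().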
Key Benefits
- Lower Memory Footprint: Stream large responses without buffering everything in memory.
- Faster Time-to-First-Byte: Access response headers immediately, before the body even begins to arrive.
- Real-Time Capabilities: Build WebSocket applications with true bidirectional communication.
- Better Resource Utilization: Process chunks as they arrive instead of waiting for full completion.
- Backward Compatible: Existing code utilizing handleRequest() continues to function almost entirely unchanged; the single breaking change is the removal of the req.body setter/getter.
HTTP Streaming in Action: Server-Sent Events with FastAPI
One of the most powerful applications for HTTP streaming is Server-Sent Events (SSE), which allows servers to push real-time updates to clients over a standard HTTP connection. Let's demonstrate building a live monitoring dashboard that streams system metrics from Python to Node.js.
Here is a FastAPI application that generates streaming metrics:
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
import asyncio
import json
import random
from datetime import datetime

app = FastAPI()

async def generate_metrics():
    """Generate fake system metrics as server-sent events"""
    while True:
        # Simulate collecting system metrics
        metrics = {
            'timestamp': datetime.now().isoformat(),
            'cpu_usage': random.uniform(20, 80),
            'memory_usage': random.uniform(40, 90),
            'active_connections': random.randint(10, 100),
            'requests_per_second': random.randint(50, 500)
        }
        # Format as an SSE event: a "data:" line terminated by a blank line
        data = f'data: {json.dumps(metrics)}\n\n'
        yield data.encode()
        # Send an update every second
        await asyncio.sleep(1)

@app.get('/metrics/stream')
async def stream_metrics():
    """Endpoint that streams real-time metrics"""
    return StreamingResponse(
        generate_metrics(),
        media_type='text/event-stream',
        headers={
            'Cache-Control': 'no-cache',
            'Connection': 'keep-alive'
        }
    )

@app.get('/health')
async def health_check():
    """Standard health check endpoint"""
    return {'status': 'healthy', 'version': '1.0.0'}
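The SSE wire format used by generate_metrics is plain text: a data: line followed by a blank line. A small illustrative helper pair (stdlib only; these function names are ours, not part of any library) shows the framing and how a consumer can recover the JSON payload:

```python
import json

def format_sse(payload: dict) -> bytes:
    # An SSE event is a "data: <text>" line terminated by a blank line
    return f'data: {json.dumps(payload)}\n\n'.encode()

def parse_sse(chunk: bytes):
    # Recover the JSON payload from each "data:" line in a received chunk
    for line in chunk.decode().split('\n'):
        if line.startswith('data: '):
            yield json.loads(line[len('data: '):])

frame = format_sse({'cpu': 42.0})
print(frame)                   # b'data: {"cpu": 42.0}\n\n'
print(list(parse_sse(frame)))  # [{'cpu': 42.0}]
```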
Now, let's consume this stream from Node.js:
import { Python, Request } from '@platformatic/python-node';

const python = new Python({
  docroot: './python-apps',
  appTarget: 'metrics_app:app'
});

async function monitorMetrics() {
  const req = new Request({ method: 'GET', url: 'http://localhost/metrics/stream' });
  console.log('Connecting to metrics stream...');
  const res = await python.handleStream(req);
  console.log(`Status: ${res.status}`);
  console.log(`Content-Type: ${res.headers.get('content-type')}`);
  console.log('\nReceiving metrics:\n');

  // Process metrics as they arrive
  for await (const chunk of res) {
    const lines = chunk.toString().split('\n');
    for (const line of lines) {
      if (line.startsWith('data: ')) {
        const data = JSON.parse(line.slice(6));
        console.log(`[${data.timestamp}]`);
        console.log(`  CPU: ${data.cpu_usage.toFixed(1)}%`);
        console.log(`  Memory: ${data.memory_usage.toFixed(1)}%`);
        console.log(`  Connections: ${data.active_connections}`);
        console.log(`  RPS: ${data.requests_per_second}`);
        console.log();
      }
    }
  }
}

monitorMetrics().catch(console.error);
How It Works
The streaming implementation effectively leverages the interaction between Python's async generators and Node.js's AsyncIterator pattern:
- Python Side: FastAPI's StreamingResponse accepts an async generator that yields chunks. Each yield dispatches data to Rust via a Tokio DuplexStream.
- ASGI Bridge: The Rust-based ASGI implementation receives http.response.body events with the more_body flag, queuing chunks as they arrive.
- Node.js Side: The handleStream() method returns a Response object that implements the AsyncIterator protocol. Each iteration of for await...of receives the next chunk.
This architecture allows Node.js to begin processing data the moment Python sends the first chunk, eliminating any waiting period for the complete response. Crucially, it also enables bidirectional backpressure, ensuring that each side operates only as fast as the other and can yield back to its respective event loop when no work needs to be done.
Real-World Use Cases for HTTP Streaming
Beyond metrics dashboards, HTTP streaming facilitates several other powerful applications:
- Large File Downloads: Stream files from Python (e.g., generated reports, media files) without loading them entirely into memory.
- AI/ML Model Outputs: Stream generated content from language models or other AI systems.
- Progressive Data Processing: Stream database query results or CSV processing as rows are processed, providing real-time feedback.
- Video/Audio Streaming: Deliver media content with Python handling processing (e.g., transcoding, filtering) and Node.js managing delivery.
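For the file-download case in particular, the trick is a generator that yields fixed-size chunks so that only one chunk is ever resident in memory. The iter_file helper below is a hypothetical sketch of the kind of generator you would hand to FastAPI's StreamingResponse:

```python
import os
import tempfile

def iter_file(path, chunk_size=64 * 1024):
    # Yield the file in fixed-size chunks so only one chunk is in memory
    with open(path, 'rb') as f:
        while chunk := f.read(chunk_size):
            yield chunk

# Demo with a small temporary file and a deliberately tiny chunk size
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b'x' * 10_000)

chunks = list(iter_file(tmp.name, chunk_size=4096))
print(len(chunks), sum(len(c) for c in chunks))  # 3 10000
os.unlink(tmp.name)
```

In a FastAPI endpoint this would be `return StreamingResponse(iter_file(path))`, and handleStream() on the Node.js side would pull one chunk at a time.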
WebSocket Support: Building Real-Time Applications
While HTTP streaming excels at server-to-client communication, WebSockets provide full bidirectional real-time channels. This is invaluable for chat applications, collaborative editing, live gaming, and scenarios where both client and server need to send messages independently.
Let's build a conversational AI assistant using FastAPI WebSockets:
from fastapi import FastAPI, WebSocket, WebSocketDisconnect
from datetime import datetime

app = FastAPI()

@app.websocket('/ws/assistant')
async def assistant_endpoint(websocket: WebSocket):
    await websocket.accept()
    # Send welcome message
    await websocket.send_text('Hello! I am your AI assistant. Ask me anything or try /help for commands.')
    try:
        while True:
            # Receive message from client
            message = await websocket.receive_text()
            # Simple routing based on message content
            if message.startswith('/help'):
                response = 'Available commands: /help, /status, /about, or ask any question'
            elif message.startswith('/status'):
                response = 'System status: All services operational'
            elif message.startswith('/about'):
                response = 'AI Assistant v1.0 - Powered by Python and Node.js'
            elif message.lower() in ['hi', 'hello', 'hey']:
                response = 'Hello! How can I help you today?'
            elif 'time' in message.lower():
                response = f'The current time is {datetime.now().strftime("%H:%M:%S")}'
            else:
                # Echo back with a simulated AI response
                response = f'You said: "{message}". I am processing your request...'
            # Send response back to client
            await websocket.send_text(response)
    except WebSocketDisconnect:
        pass  # Client disconnected
Now, let's interact with this assistant from Node.js:
import { Python, Request } from '@platformatic/python-node';

const python = new Python({
  docroot: './python-apps',
  appTarget: 'assistant_app:app'
});

async function runAssistant() {
  // Create WebSocket request
  const req = new Request({ url: 'http://localhost/ws/assistant', websocket: true });
  console.log('Connecting to AI Assistant...\n');
  const res = await python.handleStream(req);

  // Messages to send to the assistant
  const messages = [
    'Hello',
    '/help',
    '/status',
    'What is the time?',
    'Tell me about yourself'
  ];
  let messageIndex = 0;

  // Clean for-await loop: read a response, then write the next message
  for await (const chunk of res) {
    const response = chunk.toString();
    console.log(`Assistant: ${response}\n`);

    // Send the next message if we have more
    if (messageIndex < messages.length) {
      const nextMessage = messages[messageIndex];
      console.log(`You: ${nextMessage}`);
      await req.write(nextMessage);
      messageIndex++;
    } else {
      // No more messages, close the connection
      console.log('Closing connection...');
      await req.end();
      break;
    }
  }
}

runAssistant().catch(console.error);
How WebSocket Communication Works
The example above illustrates the clean request-response pattern enabled by WebSockets:
- Connection Establishment: Node.js creates a Request with websocket: true. Python receives a scope with type: 'websocket' and sends websocket.accept to establish the connection.
- Bidirectional Messaging: The for-await loop reads from the Python server, and each iteration writes a new message back:
  - Python → Node.js: Python's await websocket.send_text() delivers data to Node.js via the AsyncIterator.
  - Node.js → Python: await req.write(data) sends data to Python via websocket.receive_text().
- Clean Loop Pattern: Unlike background task patterns often seen in WebSocket examples, this approach uses a single synchronous-style loop where each receive is followed by a send, simplifying the flow and making it easier to understand and debug.
- Connection Lifecycle: Either side can close the connection. Python receives websocket.disconnect events, while Node.js closes via req.end().
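At the protocol level, this lifecycle maps onto a handful of ASGI event types. The framework-free echo handler below (driven by stub receive/send callables instead of a real server) makes the accept/receive/send/disconnect sequence explicit:

```python
import asyncio

# Minimal ASGI 3.0 WebSocket handler: accept the handshake, then echo
# text frames until the client disconnects.
async def ws_echo(scope, receive, send):
    assert scope['type'] == 'websocket'
    event = await receive()
    assert event['type'] == 'websocket.connect'
    await send({'type': 'websocket.accept'})
    while True:
        event = await receive()
        if event['type'] == 'websocket.disconnect':
            break
        await send({'type': 'websocket.send', 'text': f"echo: {event['text']}"})

# Drive the handler with a scripted inbound event sequence
async def main():
    inbound = iter([
        {'type': 'websocket.connect'},
        {'type': 'websocket.receive', 'text': 'hi'},
        {'type': 'websocket.disconnect', 'code': 1000},
    ])
    sent = []

    async def receive():
        return next(inbound)

    async def send(event):
        sent.append(event)

    await ws_echo({'type': 'websocket', 'path': '/ws'}, receive, send)
    return sent

sent = asyncio.run(main())
print(sent)  # [{'type': 'websocket.accept'}, {'type': 'websocket.send', 'text': 'echo: hi'}]
```

FastAPI's WebSocket class is a convenience wrapper over exactly these events, which is why it interoperates with the Rust ASGI bridge.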
Real-World WebSocket Use Cases
WebSocket support empowers full-stack teams to build:
- Conversational AI Assistants: Develop chatbots and AI assistants in Python, exposing them via WebSocket for real-time conversations.
- Real-Time Chat and Messaging: Create interactive chat backends where Python handles message routing and business logic.
- Live Data Feeds: Stream stock prices, sports scores, IoT sensor data, or live metrics with bidirectional control.
- Interactive Commands: Implement command-line style interfaces where users send commands and receive structured responses instantly.
- Gaming: Achieve real-time multiplayer game state synchronization with player actions and server updates.
- Live Customer Support: Facilitate real-time support chat with Python AI integration for automated responses.
Real-World Integration Scenarios for Full-Stack Teams
The combination of streaming and WebSocket support enables several powerful integration patterns for teams leveraging both Python and Node.js:
Python ML Models with Real-Time Inference
Run machine learning inference in Python (using PyTorch, TensorFlow, or transformers) and expose results via WebSocket for real-time predictions. Your Node.js API gateway can manage authentication and routing, while Python handles the computationally intensive tasks.
@app.websocket('/ws/inference')
async def ml_inference(websocket: WebSocket):
    await websocket.accept()
    model = load_model()  # Load your ML model
    while True:
        data = await websocket.receive_json()
        result = model.predict(data['input'])
        await websocket.send_json({'prediction': result})
Progressive Data Processing
Stream large dataset processing results back to Node.js as they are computed, enabling features like progress bars, partial result display, or early termination.
@app.get('/process/dataset')
async def process_dataset():
    async def process_stream():
        for batch in dataset.batches(size=1000):
            result = process_batch(batch)
            yield json.dumps(result).encode() + b'\n'
    return StreamingResponse(process_stream())
Hybrid API Gateway
Utilize Node.js as your primary API gateway for tasks like authentication, rate limiting, and routing, while seamlessly leveraging Python's rich ecosystem for specific endpoints that require streaming or WebSocket capabilities.
Existing Python Tools with WebSocket Interfaces
Wrap existing Python command-line tools or libraries with FastAPI WebSocket endpoints, making them accessible to your Node.js infrastructure without requiring a complete rewrite.
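One plausible shape for such a wrapper is streaming a subprocess's stdout line by line; in a real endpoint each line would then be forwarded with websocket.send_text(). The run_tool_streaming helper below is a hypothetical sketch (it uses the Python interpreter itself as a stand-in command-line tool):

```python
import asyncio
import sys

async def run_tool_streaming(cmd):
    # Launch a CLI tool and yield its stdout line by line as it is produced
    proc = await asyncio.create_subprocess_exec(
        *cmd, stdout=asyncio.subprocess.PIPE)
    async for line in proc.stdout:
        yield line.decode().rstrip('\n')
    await proc.wait()

async def main():
    # Use the Python interpreter itself as a stand-in command-line tool
    cmd = [sys.executable, '-c', "print('one'); print('two')"]
    return [line async for line in run_tool_streaming(cmd)]

print(asyncio.run(main()))  # ['one', 'two']
```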
Getting Started and Migration Guide
Installation
Upgrade to v2.0.0 via npm:
npm install @platformatic/python-node@latest
Or with yarn:
yarn add @platformatic/python-node@latest
Choosing Between handleRequest() and handleStream()
The API now offers two distinct methods for handling requests:
Use handleRequest() when:
- Response bodies are small and fit comfortably in memory.
- You would need to buffer the complete response body anyway.
- Backward compatibility with existing code is required.
const res = await python.handleRequest(req);
console.log(res.body.toString()); // Body available immediately
Use handleStream() when:
- Responses are large or potentially unbounded.
- You need access to headers before the body completes.
- Implementing Server-Sent Events or other streaming protocols.
- Building WebSocket applications.
- Memory efficiency is a critical concern.
const res = await python.handleStream(req);
console.log(res.body); // `undefined` for streams!
console.log(res.status); // Headers available immediately
for await (const chunk of res) {
  // Process chunks incrementally
}
Migration Checklist
Existing applications that use handleRequest() should largely continue to work without changes. The single exception is that the req.body setter and getter are no longer available. To adopt streaming:
- Identify endpoints that would benefit from streaming (e.g., large responses, real-time data, WebSockets).
- Switch those specific endpoints to use handleStream().
- Update response handling to utilize for await...of iteration.
- Test thoroughly, paying close attention to error handling during streaming.
- Monitor memory usage to verify the expected streaming benefits.
Documentation and Resources
- Full documentation: python-node GitHub repository
- ASGI specification: asgi.readthedocs.io
- FastAPI streaming: FastAPI Advanced User Guide
- FastAPI WebSockets: FastAPI WebSockets
Conclusion
The integration of HTTP streaming and WebSocket support in @platformatic/python-node v2.0.0 significantly expands the possibilities for full-stack teams building with Python and Node.js. This release extends the existing powerful integration to cover real-time, high-performance use cases that previously necessitated complex workarounds.
Whether you are developing live dashboards, interactive chat applications, progressive data processing pipelines, or exposing Python ML models via WebSocket APIs, v2.0.0 provides the essential foundation while maintaining backward compatibility with existing code. The implementation strictly adheres to the ASGI 3.0 specification, guaranteeing compatibility with the broader Python asynchronous ecosystem, including FastAPI, Starlette, Django Channels, and other ASGI-compliant frameworks. Combined with the ability to run Python directly within Node.js processes, this release offers a practical and powerful option for teams aiming to leverage the strengths of both ecosystems.
We encourage you to try out v2.0.0 today and share your feedback. Should you encounter any issues or have questions, please open an issue on our GitHub repository.