
Conversation


@andyleejordan andyleejordan commented Dec 18, 2025

This adds a new binary dscbicep to the DSC build which is a small, self-contained Rust executable hosting a gRPC server. The methods are defined in bicep.proto from Bicep's extensibility APIs (hence the addition of protoc as a build dependency). It's actually fairly similar to the MCP server, in that it's simply invoking find/get/set/delete resource functions via dsc-lib.

Using bicep local-deploy with this is actually done through a "Bicep extension" as defined in the project bicep-types-dsc. In essence, it's an OCI artifact with two things: the type information in a big JSON file and this dscbicep binary. When all put together and hosted on an OCI registry, Bicep will auto-restore the extension and just deploy the DSC resources defined in any arbitrary set of Bicep files, handling all Bicep functionality and orchestration itself.

anthony-c-martin added a commit to Azure/bicep that referenced this pull request Jan 5, 2026
While modern Windows technically supports UDS, many libraries (like
Rust's Tokio) do not. So unfortunately it is much more straightforward to
always use named pipes on Windows.

## Description

<!-- Provide a 1-2 sentence description of your change -->

## Example Usage
PowerShell/DSC#1330

## Checklist

- [x] I have read and adhere to the [contribution
guide](https://github.com/Azure/bicep/blob/main/CONTRIBUTING.md).

###### Microsoft Reviewers: [Open in
CodeFlow](https://microsoft.github.io/open-pr/?codeflow=https://github.com/Azure/bicep/pull/18712)

---------

Co-authored-by: Anthony Martin <[email protected]>
@andyleejordan force-pushed the bicep-gRPC branch 5 times, most recently from 712aeed to 46d1cea on January 13, 2026
@andyleejordan changed the title from "WIP: gRPC server for Bicep" to "gRPC server for Bicep" on Jan 13, 2026
This is imported from:
https://github.com/Azure/bicep/blob/main/src/Bicep.Local.Rpc/extension.proto

There may be a better way to sync this dependency,
but as far as I can tell they're usually just copied like this.
And tonic-prost, and tonic-prost-build...
Adds a simple build.rs script for compiling the Protobuf files.
Requires the protoc binary:

https://protobuf.dev/installation/
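The build script described above can be sketched roughly as follows. This is a minimal sketch, not the PR's exact file: the proto path and descriptor file name are illustrative assumptions, and the `tonic_prost_build::configure()` chain mirrors the calls quoted elsewhere in this review. It requires the `protoc` binary on the PATH.

```rust
// build.rs — hypothetical sketch of the Protobuf compilation step.
// Assumes tonic-prost-build as a build-dependency and protoc on the PATH;
// the file names here are placeholders, not the PR's actual values.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let out_dir = std::path::PathBuf::from(std::env::var("OUT_DIR")?);
    tonic_prost_build::configure()
        .build_client(false) // only the server side is needed
        // Emit a file descriptor set, used later for gRPC reflection.
        .file_descriptor_set_path(out_dir.join("extension_descriptor.bin"))
        .compile_protos(&["proto/extension.proto"], &["proto"])?;
    Ok(())
}
```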
Since the Bicep extension is very unforgiving on the CLI.
Until Bicep will actually send this.
Upgraded to a "Resource not found" error!
Since the gRPC (and MCP) servers already start one.
@andyleejordan andyleejordan force-pushed the bicep-gRPC branch 2 times, most recently from 60b2211 to 73f3692 Compare January 14, 2026 20:43
@andyleejordan andyleejordan marked this pull request as ready for review January 14, 2026 20:54
@andyleejordan (Member Author) left a comment

Notes for reviewers.

Comment on lines +29 to +32
- name: Add WinGet links to PATH
if: matrix.platform == 'windows-latest'
run: |
"$env:LOCALAPPDATA\Microsoft\WinGet\Links" | Out-File -Append -Encoding utf8 -FilePath $env:GITHUB_PATH
Member Author

This was necessary for Protobuf because WinGet makes protoc.exe available in this local app data path, which on GitHub images is not already in the PATH. That's unlike the rest of our dependencies: Node.js is installed system-wide and so already on the PATH, and tree-sitter is installed via Cargo. This was our first installation on GitHub Actions using WinGet that doesn't run a system installer.

- pwsh: |
apt update
apt -y install musl-tools rpm dpkg build-essential
apt -y install musl-tools rpm dpkg build-essential protobuf-compiler
Member Author

I only have a loose understanding of the state of the OneBranch pipeline for this project. The build module runs the same apt install (and that is the correct package name), but I'm inferring this is the "restore" phase where it also needs to run? Or is this just for bootstrapping Rust? Either way, this package does eventually have to be installed.

@@ -0,0 +1,66 @@
syntax = "proto3";
Member Author

I'd put a header and a comment in this file except that it's verbatim copied from Bicep.

.set(&properties, false, &ExecutionKind::Actual)
.map_err(|e| Status::aborted(e.to_string()))?
else {
return Err(Status::unimplemented("Group resources not supported"));
Member Author

In general this won't be hit, because the extensibility API calls this function on a per-resource basis, but I have to unwrap the SetResult return struct, so this is the error that satisfies the function's typing.
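The unwrap shape being described is the `let ... else` pattern. A minimal self-contained sketch, using hypothetical stand-ins for dsc-lib's SetResult and tonic's Status (the real types differ):

```rust
// Hypothetical stand-ins, just to show the `let ... else` unwrap shape.
#[derive(Debug, PartialEq)]
enum SetResult {
    Resource(String),   // per-resource after-state
    Group(Vec<String>), // group results, unreachable through the extension
}

#[derive(Debug, PartialEq)]
struct Status(String);

fn unwrap_set(result: SetResult) -> Result<String, Status> {
    // The error arm exists to satisfy the typing: Bicep's extensibility API
    // calls per resource, so the Group variant should never appear here.
    let SetResult::Resource(after_state) = result else {
        return Err(Status("Group resources not supported".to_string()));
    };
    Ok(after_state)
}

fn main() {
    assert_eq!(
        unwrap_set(SetResult::Resource("{}".to_string())),
        Ok("{}".to_string())
    );
    assert!(unwrap_set(SetResult::Group(Vec::new())).is_err());
}
```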

properties: result.after_state.to_string(),
status: None,
}),
error_data: None,
Member Author

Once in use I suspect we may want to transform the error data a little better, though returning Err above works just fine.

Comment on lines +243 to +348
#[cfg(windows)]
if let Some(pipe_name) = pipe {
// TODO: This named pipe code is messy and honestly mostly generated. It
// does work, but most of the problem lies in minimal Windows support
// inside the Tokio library (and no support for UDS).
use std::pin::Pin;
use std::task::{Context, Poll};
use tokio::io::{AsyncRead, AsyncWrite};
use tokio::net::windows::named_pipe::ServerOptions;
use tonic::transport::server::Connected;

// Wrapper to implement Connected trait for NamedPipeServer
struct NamedPipeConnection(tokio::net::windows::named_pipe::NamedPipeServer);

impl Connected for NamedPipeConnection {
type ConnectInfo = ();

fn connect_info(&self) -> Self::ConnectInfo {
()
}
}

impl AsyncRead for NamedPipeConnection {
fn poll_read(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &mut tokio::io::ReadBuf<'_>,
) -> Poll<std::io::Result<()>> {
Pin::new(&mut self.0).poll_read(cx, buf)
}
}

impl AsyncWrite for NamedPipeConnection {
fn poll_write(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &[u8],
) -> Poll<std::io::Result<usize>> {
Pin::new(&mut self.0).poll_write(cx, buf)
}

fn poll_flush(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<std::io::Result<()>> {
Pin::new(&mut self.0).poll_flush(cx)
}

fn poll_shutdown(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<std::io::Result<()>> {
Pin::new(&mut self.0).poll_shutdown(cx)
}
}

// Windows named pipes must be in the format \\.\pipe\{name}
let full_pipe_path = format!(r"\\.\pipe\{}", pipe_name);
tracing::info!("Starting Bicep gRPC server on named pipe: {full_pipe_path}");

// Create a stream that accepts connections on the named pipe
let incoming = async_stream::stream! {
// Track whether this is the first instance
let mut is_first = true;

loop {
let pipe = if is_first {
ServerOptions::new()
.first_pipe_instance(true)
.create(&full_pipe_path)
} else {
ServerOptions::new()
.create(&full_pipe_path)
};

let server = match pipe {
Ok(server) => server,
Err(e) => {
tracing::error!("Failed to create named pipe: {}", e);
break;
}
};

is_first = false;

tracing::debug!("Waiting for client to connect to named pipe...");
match server.connect().await {
Ok(()) => {
tracing::info!("Client connected to named pipe");
yield Ok::<_, std::io::Error>(NamedPipeConnection(server));
}
Err(e) => {
tracing::error!("Failed to accept connection: {}", e);
break;
}
}
}
};

Server::builder()
.add_service(BicepExtensionServer::new(service))
.serve_with_incoming(incoming)
.await?;

return Ok(());
}
Member Author

See the first comment. This works; it's just a lot more verbose than I would've liked, as I did indeed have to rely on Copilot to generate the boilerplate necessary to map one interface to another. Since the Rust community has been arguing about Unix socket support on Windows for half a decade, it's not coming, so we have to use named pipes, which are more Windows-native but not nearly as well supported in Rust/tonic/tokio. If this can be simplified, please let me know. I was just happy to get it working.

Comment on lines +350 to +363
// Default to HTTP server on [::1]:50051 if no transport specified
let addr = http.unwrap_or_else(|| "[::1]:50051".to_string()).parse()?;
tracing::info!("Starting Bicep gRPC server on HTTP: {addr}");

let reflection_service = tonic_reflection::server::Builder::configure()
.register_encoded_file_descriptor_set(proto::FILE_DESCRIPTOR_SET)
.build_v1()
.unwrap();

Server::builder()
.add_service(reflection_service)
.add_service(BicepExtensionServer::new(service))
.serve(addr)
.await?;
Member Author

The aforementioned "debug" HTTP server where requests can be sent to DSC over gRPC without Bicep at all (which is really neat).


tonic_prost_build::configure()
.build_client(false)
.file_descriptor_set_path(&descriptor_path)
Member Author

This is used for the reflection in the HTTP server.

Comment on lines +1 to +8
[package]
name = "dscbicep"
version = "0.1.0"
edition = "2021"

[[bin]]
name = "dscbicep"
path = "src/main.rs"
Member Author

I have no strong opinions on the name, version, or file structure other than suggesting that changes be made after this PR, as the extension has to incorporate the binary.

Comment on lines +785 to +800
// Try to use existing runtime first (e.g. from gRPC or MCP server)
match tokio::runtime::Handle::try_current() {
Ok(handle) => {
tokio::task::block_in_place(|| {
handle.block_on(run_async)
})
},
// Otherwise create a new runtime
Err(_) => {
tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()
.unwrap()
.block_on(run_async)
}
}
Member Author

This is the only real change inside of DSC. Under gRPC there's already a runtime, so we need to reuse it (and tell it we're spawning a blocking task so that it manages the threads correctly and doesn't deadlock). Whereas when the DSC CLI invokes these resources, it spins up a runtime just for the sake of threading the processes (yes, I know that sounds funny).
