gRPC server for Bicep #1330
While modern Windows technically supports Unix domain sockets (UDS), many libraries (like Rust's Tokio) do not, so unfortunately it is much more straightforward to always use named pipes on Windows.

Example usage: see PowerShell/DSC#1330

Co-authored-by: Anthony Martin <[email protected]>
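To illustrate the platform split described above: the `endpoint` helper below and the Unix socket path are hypothetical (the real code wires the transport through CLI options), but the `\\.\pipe\{name}` format matches Windows named-pipe convention.

```rust
// Sketch of the transport split: named pipes on Windows, UDS elsewhere.
// The helper and the /tmp socket path are illustrative, not from the PR.
fn endpoint(name: &str, windows: bool) -> String {
    if windows {
        // Windows named pipes must live under \\.\pipe\
        format!(r"\\.\pipe\{}", name)
    } else {
        // On Unix-likes, a Unix domain socket path works fine
        format!("/tmp/{}.sock", name)
    }
}

fn main() {
    println!("{}", endpoint("dscbicep", true));
    println!("{}", endpoint("dscbicep", false));
}
```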
This is imported from: https://github.com/Azure/bicep/blob/main/src/Bicep.Local.Rpc/extension.proto There may be a better way to sync this dependency, but as far as I can tell they're usually just copied like this.
This also adds tonic-prost and tonic-prost-build. A simple build.rs script compiles the Protobuf files; it requires the protoc binary: https://protobuf.dev/installation/
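For reference, a minimal build.rs for this setup might look like the sketch below. The proto path and descriptor file name are illustrative, and the tonic-prost-build calls mirror the ones visible later in this diff; treat it as a sketch, not the PR's exact script.

```rust
// build.rs sketch: compile bicep.proto with tonic-prost-build.
// Paths and the descriptor file name are illustrative.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let out_dir = std::path::PathBuf::from(std::env::var("OUT_DIR")?);
    let descriptor_path = out_dir.join("bicep_descriptor.bin");

    tonic_prost_build::configure()
        .build_client(false) // only the server side is needed
        .file_descriptor_set_path(&descriptor_path) // used by gRPC reflection
        .compile_protos(&["proto/bicep.proto"], &["proto"])?;

    Ok(())
}
```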
Copilot was helpful here; the result was then reconciled against this example: https://github.com/hyperium/tonic/blob/master/examples/src/routeguide/server.rs
Since the Bicep extension is very unforgiving on the CLI.
Until Bicep will actually send this.
Upgraded to a "Resource not found" error!
Since the gRPC (and MCP) servers already start one.
Now output works!
andyleejordan left a comment:
Notes for reviewers.
```yaml
- name: Add WinGet links to PATH
  if: matrix.platform == 'windows-latest'
  run: |
    "$env:LOCALAPPDATA\\Microsoft\\WinGet\\Links" | Out-File -Append -Encoding utf8 -FilePath $env:GITHUB_PATH
```
This was necessary for Protobuf: WinGet makes protoc.exe available in this local app data path, which is not already on the PATH on GitHub-hosted images. That is unlike the rest of our dependencies (Node.js is installed system-wide and so already on the PATH; tree-sitter installs via Cargo). This was our first installation on GitHub Actions using WinGet that doesn't run a system installer.
```diff
 - pwsh: |
     apt update
-    apt -y install musl-tools rpm dpkg build-essential
+    apt -y install musl-tools rpm dpkg build-essential protobuf-compiler
```
I only have a loose understanding of the state of the OneBranch pipeline for this project. The build module will run the same apt install (and that is the correct package name), but I'm inferring this is the "restore" phase where it needs to be run? Or is this just for bootstrapping Rust? Either way, this package does eventually have to be installed.
(new file, 66 lines)

```protobuf
syntax = "proto3";
```
I'd put a header and a comment in this file, except that it's copied verbatim from Bicep.
```rust
    .set(&properties, false, &ExecutionKind::Actual)
    .map_err(|e| Status::aborted(e.to_string()))?
else {
    return Err(Status::unimplemented("Group resources not supported"));
```
This won't in general be hit, because the extensibility API calls this function on a per-resource basis, but I have to unwrap the SetResult return struct, so this is the error that satisfies the function's typing.
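The shape of that unwrap can be sketched with stand-in types; `SetResult` below is a mock, not dsc-lib's actual definition, and a plain `String` stands in for tonic's `Status`.

```rust
// Stand-in for dsc-lib's SetResult: a set() can report either a single
// resource result or a group result; the gRPC handler only supports
// the single-resource case.
enum SetResult {
    Resource(String), // after-state of a single resource
    Group(Vec<String>),
}

// Mirrors the let-else in the diff: unwrap the resource variant or
// return an "unimplemented" error.
fn unwrap_resource(result: SetResult) -> Result<String, String> {
    let SetResult::Resource(after_state) = result else {
        return Err("Group resources not supported".to_string());
    };
    Ok(after_state)
}

fn main() {
    assert_eq!(
        unwrap_resource(SetResult::Resource("ok".into())),
        Ok("ok".to_string())
    );
    assert!(unwrap_resource(SetResult::Group(vec![])).is_err());
    println!("let-else unwrap behaves as expected");
}
```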
```rust
        properties: result.after_state.to_string(),
        status: None,
    }),
    error_data: None,
```
Once in use, I suspect we may want to transform the error data a little better, though returning Err above works just fine.
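One possible shape for that transformation, as a sketch: the `ErrorDetail` struct, its fields, and the error code are hypothetical stand-ins, not the proto's actual error_data type.

```rust
// Hypothetical structured error payload; the real proto message for
// error_data may differ.
#[derive(Debug, PartialEq)]
struct ErrorDetail {
    code: String,
    message: String,
}

// Instead of always sending error_data: None, a failed operation could
// be mapped into a structured detail like this.
fn to_error_data(err: Option<&str>) -> Option<ErrorDetail> {
    err.map(|message| ErrorDetail {
        code: "DSC_OPERATION_FAILED".to_string(), // hypothetical code
        message: message.to_string(),
    })
}

fn main() {
    assert_eq!(to_error_data(None), None);
    let detail = to_error_data(Some("set failed")).unwrap();
    assert_eq!(detail.message, "set failed");
    println!("error_data mapping works");
}
```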
```rust
#[cfg(windows)]
if let Some(pipe_name) = pipe {
    // TODO: This named pipe code is messy and honestly mostly generated. It
    // does work, but most of the problem lies in minimal Windows support
    // inside the Tokio library (and no support for UDS).
    use std::pin::Pin;
    use std::task::{Context, Poll};
    use tokio::io::{AsyncRead, AsyncWrite};
    use tokio::net::windows::named_pipe::ServerOptions;
    use tonic::transport::server::Connected;

    // Wrapper to implement Connected trait for NamedPipeServer
    struct NamedPipeConnection(tokio::net::windows::named_pipe::NamedPipeServer);

    impl Connected for NamedPipeConnection {
        type ConnectInfo = ();

        fn connect_info(&self) -> Self::ConnectInfo {
            ()
        }
    }

    impl AsyncRead for NamedPipeConnection {
        fn poll_read(
            mut self: Pin<&mut Self>,
            cx: &mut Context<'_>,
            buf: &mut tokio::io::ReadBuf<'_>,
        ) -> Poll<std::io::Result<()>> {
            Pin::new(&mut self.0).poll_read(cx, buf)
        }
    }

    impl AsyncWrite for NamedPipeConnection {
        fn poll_write(
            mut self: Pin<&mut Self>,
            cx: &mut Context<'_>,
            buf: &[u8],
        ) -> Poll<std::io::Result<usize>> {
            Pin::new(&mut self.0).poll_write(cx, buf)
        }

        fn poll_flush(
            mut self: Pin<&mut Self>,
            cx: &mut Context<'_>,
        ) -> Poll<std::io::Result<()>> {
            Pin::new(&mut self.0).poll_flush(cx)
        }

        fn poll_shutdown(
            mut self: Pin<&mut Self>,
            cx: &mut Context<'_>,
        ) -> Poll<std::io::Result<()>> {
            Pin::new(&mut self.0).poll_shutdown(cx)
        }
    }

    // Windows named pipes must be in the format \\.\pipe\{name}
    let full_pipe_path = format!(r"\\.\pipe\{}", pipe_name);
    tracing::info!("Starting Bicep gRPC server on named pipe: {full_pipe_path}");

    // Create a stream that accepts connections on the named pipe
    let incoming = async_stream::stream! {
        // Track whether this is the first instance
        let mut is_first = true;

        loop {
            let pipe = if is_first {
                ServerOptions::new()
                    .first_pipe_instance(true)
                    .create(&full_pipe_path)
            } else {
                ServerOptions::new().create(&full_pipe_path)
            };

            let server = match pipe {
                Ok(server) => server,
                Err(e) => {
                    tracing::error!("Failed to create named pipe: {}", e);
                    break;
                }
            };

            is_first = false;

            tracing::debug!("Waiting for client to connect to named pipe...");
            match server.connect().await {
                Ok(()) => {
                    tracing::info!("Client connected to named pipe");
                    yield Ok::<_, std::io::Error>(NamedPipeConnection(server));
                }
                Err(e) => {
                    tracing::error!("Failed to accept connection: {}", e);
                    break;
                }
            }
        }
    };

    Server::builder()
        .add_service(BicepExtensionServer::new(service))
        .serve_with_incoming(incoming)
        .await?;

    return Ok(());
}
```
See the first comment. This works; it's just a lot more verbose than I would've liked, as I did indeed have to rely on Copilot to generate the boilerplate necessary to map one interface to another. Since the Rust community has been arguing about Unix socket support on Windows for half a decade, it's not coming, so we have to use named pipes, which are more Windows-native but not nearly as well supported in Rust/tonic/Tokio. If this can be simplified, please let me know. I was just happy to get it working.
```rust
// Default to HTTP server on [::1]:50051 if no transport specified
let addr = http.unwrap_or_else(|| "[::1]:50051".to_string()).parse()?;
tracing::info!("Starting Bicep gRPC server on HTTP: {addr}");

let reflection_service = tonic_reflection::server::Builder::configure()
    .register_encoded_file_descriptor_set(proto::FILE_DESCRIPTOR_SET)
    .build_v1()
    .unwrap();

Server::builder()
    .add_service(reflection_service)
    .add_service(BicepExtensionServer::new(service))
    .serve(addr)
    .await?;
```
This is the aforementioned "debug" HTTP server, where requests can be sent to DSC over gRPC without Bicep at all (which is really neat).
```rust
tonic_prost_build::configure()
    .build_client(false)
    .file_descriptor_set_path(&descriptor_path)
```
This is used for the reflection in the HTTP server.
```toml
[package]
name = "dscbicep"
version = "0.1.0"
edition = "2021"

[[bin]]
name = "dscbicep"
path = "src/main.rs"
```
I have no strong opinions on the name, version, or file structure other than suggesting that changes be made after this PR, as the extension has to incorporate the binary.
```rust
// Try to use existing runtime first (e.g. from gRPC or MCP server)
match tokio::runtime::Handle::try_current() {
    Ok(handle) => {
        tokio::task::block_in_place(|| {
            handle.block_on(run_async)
        })
    },
    // Otherwise create a new runtime
    Err(_) => {
        tokio::runtime::Builder::new_multi_thread()
            .enable_all()
            .build()
            .unwrap()
            .block_on(run_async)
    }
}
```
This is the only real change inside of DSC. Since under gRPC there's already a runtime, we need to re-use it (and tell it that we're spawning a blocking task so that it manages the threads correctly and doesn't deadlock); whereas when the DSC CLI invokes these resources, it spins up a runtime just for the sake of threading the processes (yes, I know that sounds funny).
This adds a new binary `dscbicep` to the DSC build, which is a small, self-contained Rust executable hosting a gRPC server. The methods are defined in `bicep.proto` from Bicep's extensibility APIs (hence the addition of `protoc` as a build dependency). It's actually fairly similar to the MCP server, in that it's simply invoking find/get/set/delete resource functions via `dsc-lib`.

Using `bicep local-deploy` with this is actually done through a "Bicep extension" as defined in the project bicep-types-dsc. In essence, it's an OCI artifact with two things: the type information in a big JSON file and this `dscbicep` binary. When all put together and hosted on an OCI registry, Bicep will auto-restore the extension and just deploy the DSC resources defined in any arbitrary set of Bicep files, handling all Bicep functionality and orchestration itself.