This article is the next part of the Brainfuck interpreter implementation series. In the previous part, we built a library that parses and executes the Brainfuck language. In this part, we will use this library in a CLI program and deploy the code to AWS Lambda. Let’s begin with the CLI.

CLI program implementation

Firstly, we need to create a new application in our workspace. In the terminal, go to the root workspace directory and run this command:

cargo new brainfuck_cli

Then, add the new application to the members list in the root Cargo.toml:

members = [
    "brainfuck_interpreter",
    "brainfuck_cli",
]

Dependency definition

I highly recommend the clap library for building CLI applications. We will also use anyhow to handle any error type more easily. As a rule of thumb, use thiserror for libraries and anyhow for applications.

The last line in the dependencies uses our brainfuck_interpreter library in the same workspace.

clap = { version = "3.1.5", features = ["derive"] }
anyhow = "1.0"
brainfuck_interpreter = { path = "../brainfuck_interpreter" }

Application code

CLI arguments definition

The clap crate allows us to define the needed arguments with a struct. Here is how it looks:

use clap::Parser;

#[derive(Parser, Debug)]
#[clap(author, version, about, long_about = None)]
struct Args {
    source_path: String,
}

Our program needs the path to the source file. We will read the file’s content and execute it on the fly.

To read the arguments in the main function, we do this:

use anyhow::Result;

fn main() -> Result<()> {
    let args = Args::parse();
    // Other code
    Ok(())
}

Note that we use anyhow::Result as the return type of main. It handles any error type without the need to write a separate handler for each one.
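To see what this buys us, here is a std-only sketch (hypothetical code, not from this project): Box<dyn Error> plays the same unifying role that anyhow::Result plays here, while anyhow adds context and nicer error reports on top.

```rust
use std::error::Error;
use std::num::ParseIntError;

// A helper with its own concrete error type.
fn parse_number(text: &str) -> Result<i64, ParseIntError> {
    text.trim().parse()
}

// One signature absorbs any error type: `?` converts ParseIntError
// (or io::Error, etc.) into the boxed error automatically via From.
// anyhow::Result<T> works the same way, with better reporting.
fn run() -> Result<i64, Box<dyn Error>> {
    let n = parse_number("42")?;
    Ok(n)
}

fn main() {
    assert_eq!(run().unwrap(), 42);
    println!("ok");
}
```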

Main application code

To read the file’s content, we can use the standard library function fs::read_to_string(args.source_path). We pass io::stdin as the input: io::Read argument of the interpret method. After that, we write the output to stdout.

use std::{fs, io};

fn main() -> Result<()> {
    let args = Args::parse();
    let source = fs::read_to_string(args.source_path)?;
    let stdin = io::stdin();

    let result = interpret(&source, Box::new(stdin))?;
    print!("{}", result);

    Ok(())
}
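Passing stdin as a boxed Read works because anything implementing io::Read can stand in for it. A small std-only sketch (the consume function below is hypothetical, not part of the library) shows why this is handy for testing: a plain byte slice satisfies the same bound.

```rust
use std::io::Read;

// Any `Read` implementor fits behind the box: a file, a socket,
// or, for tests, a plain in-memory byte slice.
fn consume(mut input: Box<dyn Read>) -> std::io::Result<String> {
    let mut buffer = String::new();
    input.read_to_string(&mut buffer)?;
    Ok(buffer)
}

fn main() -> std::io::Result<()> {
    // In the CLI this would be Box::new(io::stdin()); in tests a slice suffices.
    let text = consume(Box::new("hello".as_bytes()))?;
    assert_eq!(text, "hello");
    println!("ok");
    Ok(())
}
```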


CLI Testing

Let’s execute the program:

cargo run

You will receive this output:

error: The following required arguments were not provided:
    <SOURCE_PATH>

USAGE:
    brainfuck_cli <SOURCE_PATH>

For more information try --help

Let’s see the help page:

cargo run -- --help


brainfuck_cli 0.1.0

USAGE:
    brainfuck_cli <SOURCE_PATH>

ARGS:
    <SOURCE_PATH>

OPTIONS:
    -h, --help       Print help information
    -V, --version    Print version information

Note how descriptive and easy to use our application is. The clap library gives us CLI development best practices for free.

To run a sample Hello world application located at brainfuck_examples/, you should execute this command:

cargo run -- ../brainfuck_examples/
Hello World!

A more sophisticated example echoes everything from the input back to the output. Start the program, type anything, and then press CTRL + D on Linux/Mac or CTRL + Z on Windows to close stdin:

cargo run -- ../brainfuck_examples/
I love rust[CTRL+D]
I love rust

You can also redirect stdin from other files:

echo "blah blah" | cargo run -- ../brainfuck_examples/
blah blah
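The echo example relies on the classic Brainfuck cat program ,[.,] (the same source we POST to the Lambda later): read a byte, then loop printing and reading until EOF leaves the cell at zero. To make the semantics concrete, here is a tiny illustrative evaluator, written from scratch for this sketch and entirely separate from our brainfuck_interpreter library:

```rust
use std::io::Read;

// A minimal, illustrative Brainfuck evaluator (NOT the article's library).
// It assumes well-formed, matched brackets; just enough to run ",[.,]".
fn run_bf(code: &str, mut input: impl Read) -> String {
    let prog: Vec<u8> = code.bytes().collect();
    let mut tape = vec![0u8; 30_000];
    let (mut ptr, mut pc) = (0usize, 0usize);
    let mut out = String::new();
    while pc < prog.len() {
        match prog[pc] {
            b'>' => ptr += 1,
            b'<' => ptr -= 1,
            b'+' => tape[ptr] = tape[ptr].wrapping_add(1),
            b'-' => tape[ptr] = tape[ptr].wrapping_sub(1),
            b'.' => out.push(tape[ptr] as char), // ASCII output
            b',' => {
                let mut byte = [0u8];
                // EOF stores 0, which is what terminates the echo loop
                tape[ptr] = if input.read(&mut byte).unwrap_or(0) == 1 { byte[0] } else { 0 };
            }
            b'[' if tape[ptr] == 0 => {
                // jump forward past the matching ']'
                let mut depth = 1;
                while depth > 0 {
                    pc += 1;
                    match prog[pc] { b'[' => depth += 1, b']' => depth -= 1, _ => {} }
                }
            }
            b']' if tape[ptr] != 0 => {
                // jump back to the matching '['
                let mut depth = 1;
                while depth > 0 {
                    pc -= 1;
                    match prog[pc] { b']' => depth += 1, b'[' => depth -= 1, _ => {} }
                }
            }
            _ => {} // any other byte is a comment
        }
        pc += 1;
    }
    out
}

fn main() {
    // ",[.,]": read; while cell != 0 { print; read } — i.e. echo until EOF.
    assert_eq!(run_bf(",[.,]", "I love rust".as_bytes()), "I love rust");
    println!("ok");
}
```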

AWS Lambda program implementation

This application will run inside AWS Lambda, accept HTTP request events, and return HTTP responses. To work correctly, we will configure API Gateway: it will forward all requests to AWS Lambda and return the responses to the user.

Follow the same steps as for the CLI to create the brainfuck_aws application inside the workspace:

cargo new brainfuck_aws

Dependency definition

To work with API Gateway and Lambda, we use the official lambda_http crate. It has all the required structures to work with them. We also need to serialize and deserialize the inputs/outputs; we use serde and serde_json for that. tokio provides the asynchronous I/O runtime for our application. Lastly, we use env_logger and log for logging; the runtime will ship the logs to CloudWatch.

brainfuck_interpreter = { path = '../brainfuck_interpreter' }
lambda_http = "0.5"
serde_json = "1.0"
serde = { version = "1.0", features = ["derive"] }
tokio = { version = "1.0", features = ["macros", "rt-multi-thread"] }
env_logger = "0.9"
log = "0.4"

Application code

Request and response types

Our request should contain the source code and the input:

use serde::{Deserialize, Serialize};

#[derive(Deserialize, Debug)]
struct InterpreterRequest {
    source: String,
    input: Option<String>,
}

The response should return either the result of successful execution or an error message:

#[derive(Serialize, Debug)]
#[serde(rename_all = "lowercase")]
enum InterpreterResponse {
    Success(String),
    Error(String),
}
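With these serde attributes, the variants serialize as externally tagged JSON objects with lowercase keys, which is exactly the shape the API returns:

```json
{ "success": "program output here" }
{ "error": "error description here" }
```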

Main code

During startup, we enable logging and start the Lambda runtime:

use lambda_http::{service_fn, Error, IntoResponse, Request, RequestExt, Response};
use log::{debug, info, warn};

#[tokio::main]
async fn main() -> Result<(), Error> {
    env_logger::init();
    info!("Starting lambda function!");

    let func = service_fn(func);
    lambda_http::run(func).await?;
    Ok(())
}

The Lambda runtime will execute our func function every time an event fires. This function processes the request and, depending on the interpreter’s result, returns either 200 OK with the stdout contents or 400 BAD REQUEST with the error description:

async fn func(event: Request) -> Result<impl IntoResponse, Error> {
    debug!("Received request: {:?}", event);
    info!("Processing request!");

    match process_request(event).await {
        Ok(result) => Ok(Response::builder()
            .status(200)
            .body(serde_json::to_string(&result)?)?),
        Err(error) => Ok(Response::builder().status(400).body(serde_json::to_string(
            &InterpreterResponse::Error(error.to_string()),
        )?)?),
    }
}

Before execution, we parse the payload. If it’s not valid, we return an error. Otherwise, we start parsing and executing the code. If any error occurs inside the brainfuck library, we return it; otherwise, we return the stdout contents:

async fn process_request(request: Request) -> Result<InterpreterResponse, Error> {
    if let Some(request) = request.payload::<InterpreterRequest>()? {
        debug!("Body is valid. Processing request");
        let source = request.source;
        let input = request.input.unwrap_or(String::new());
        let stdin = Box::new(input.as_bytes());

        let result = match interpret(&source, stdin) {
            Ok(output) => InterpreterResponse::Success(output),
            Err(error) => InterpreterResponse::Error(error.to_string()),
        };
        info!("Interpreter result: {:?}", result);

        Ok(result)
    } else {
        warn!("Can't process request. Invalid body");
        Err("Invalid body")?
    }
}
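The closing Err("Invalid body")? works because a &str converts into a boxed error type via the From trait, and lambda_http::Error is such a boxed type. A std-only sketch of the same idiom (the reject function is hypothetical):

```rust
use std::error::Error;

// `Err("some message")?` compiles because the standard library provides
// `impl From<&str> for Box<dyn Error>`; `?` performs the conversion.
fn reject() -> Result<(), Box<dyn Error>> {
    Err("Invalid body")?
}

fn main() {
    let err = reject().unwrap_err();
    assert_eq!(err.to_string(), "Invalid body");
    println!("ok");
}
```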

Template definition

We will use SAM (Serverless Application Model) to simplify our deployment to AWS. SAM lets us specify only the function we need and its endpoints. Under the hood, it creates the API Gateway, AWS Lambda, IAM roles, databases, etc.

In this example, we need an AWS Lambda function that API Gateway will call. It will be named BrainfuckFunction and use the x86_64 architecture and the Amazon Linux 2 custom runtime. API Gateway will provide a POST /brainfuck endpoint that triggers the function.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  BrainfuckFunction:
    Type: AWS::Serverless::Function
    Properties:
      MemorySize: 128
      Architectures: ["x86_64"]
      Handler: bootstrap
      Runtime: provided.al2
      Timeout: 5
      CodeUri: .
      Events:
        Brainfuck:
          Type: Api
          Properties:
            Path: /brainfuck
            Method: post

Outputs:
  RestApi:
    Description: "API Gateway endpoint URL for Prod stage for Brainfuck function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}"
  BrainfuckFunction:
    Description: "Brainfuck World Lambda Function ARN"
    Value: !GetAtt BrainfuckFunction.Arn
  BrainfuckFunctionIamRole:
    Description: "Implicit IAM Role created for Brainfuck World function"
    Value: !GetAtt BrainfuckFunctionRole.Arn


We will create a Makefile to simplify our work. We can then run make in the console, and it will compile the application and prepare it for deployment to AWS Lambda.

Note that we are building the application for the x86_64-unknown-linux-musl target: the AWS Lambda custom runtime expects a binary that runs on Amazon Linux, and musl gives us a statically linked one.

The process of building is the following:

  • Add the x86_64-unknown-linux-musl target to rustup.
  • Compile the application for the musl target.
  • Run sam build, which executes build-BrainfuckFunction internally.
  • Copy the compiled binary into the output directory.
  • Discard symbols from the compiled binary to reduce its size.

ARCH = x86_64-unknown-linux-musl
ROOT_DIR = $(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))

build:
	rustup target add $(ARCH)
	cargo build --target $(ARCH) --release --target-dir ../target
	sam build

build-BrainfuckFunction:
	cp $(ROOT_DIR)/../target/$(ARCH)/release/brainfuck_aws $(ARTIFACTS_DIR)/bootstrap
	strip $(ARTIFACTS_DIR)/bootstrap


AWS CLI configuration

Almost everything is done. Download and install the AWS CLI and SAM CLI, then configure the AWS client before deployment:

aws configure

You should set up your access key ID and secret access key. You only need to do this once; the configuration persists locally.

SAM deployment

To initiate deployment for the first time, you should run this command:

sam deploy --guided

It will prompt you for the function name, region, etc. After that, it creates a local file, samconfig.toml, so next time you can simply run sam deploy.
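For reference, the generated samconfig.toml looks roughly like this (a sketch based on the stack name and region visible in the deployment output below; your values will differ):

```toml
version = 0.1

[default.deploy.parameters]
stack_name = "brainfuck"
region = "us-east-1"
capabilities = "CAPABILITY_IAM"
confirm_changeset = true
```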

After initial configuration, it will create all the resources and deploy our application. You should get similar output to this:

CloudFormation events from stack operations
ResourceStatus                           ResourceType                             LogicalResourceId                        ResourceStatusReason                   
CREATE_IN_PROGRESS                       AWS::IAM::Role                           BrainfuckFunctionRole                    -                                      
CREATE_IN_PROGRESS                       AWS::IAM::Role                           BrainfuckFunctionRole                    Resource creation Initiated            
CREATE_COMPLETE                          AWS::IAM::Role                           BrainfuckFunctionRole                    -                                      
CREATE_IN_PROGRESS                       AWS::Lambda::Function                    BrainfuckFunction                        -                                      
CREATE_IN_PROGRESS                       AWS::Lambda::Function                    BrainfuckFunction                        Resource creation Initiated            
CREATE_COMPLETE                          AWS::Lambda::Function                    BrainfuckFunction                        -                                      
CREATE_IN_PROGRESS                       AWS::ApiGateway::RestApi                 ServerlessRestApi                        -                                      
CREATE_IN_PROGRESS                       AWS::ApiGateway::RestApi                 ServerlessRestApi                        Resource creation Initiated            
CREATE_COMPLETE                          AWS::ApiGateway::RestApi                 ServerlessRestApi                        -                                      
CREATE_IN_PROGRESS                       AWS::ApiGateway::Deployment              ServerlessRestApiDeployment683b01a6bf    -                                      
CREATE_IN_PROGRESS                       AWS::Lambda::Permission                  BrainfuckFunctionBrainfuckPermissionPr   -                                      
CREATE_IN_PROGRESS                       AWS::Lambda::Permission                  BrainfuckFunctionBrainfuckPermissionPr   Resource creation Initiated            
CREATE_COMPLETE                          AWS::ApiGateway::Deployment              ServerlessRestApiDeployment683b01a6bf    -                                      
CREATE_IN_PROGRESS                       AWS::ApiGateway::Deployment              ServerlessRestApiDeployment683b01a6bf    Resource creation Initiated            
CREATE_IN_PROGRESS                       AWS::ApiGateway::Stage                   ServerlessRestApiProdStage               -                                      
CREATE_IN_PROGRESS                       AWS::ApiGateway::Stage                   ServerlessRestApiProdStage               Resource creation Initiated            
CREATE_COMPLETE                          AWS::Lambda::Permission                  BrainfuckFunctionBrainfuckPermissionPr   -                                      
CREATE_COMPLETE                          AWS::ApiGateway::Stage                   ServerlessRestApiProdStage               -                                      
CREATE_COMPLETE                          AWS::CloudFormation::Stack               brainfuck                                -                                      

CloudFormation outputs from deployed stack
Key                 BrainfuckFunctionIamRole                                                                                                                       
Description         Implicit IAM Role created for Brainfuck World function                                                                                         
Value               arn:aws:iam::085583328641:role/brainfuck-BrainfuckFunctionRole-10GJIGA9HAMOU                                                                   

Key                 RestApi                                                                                                                                        
Description         API Gateway endpoint URL for Prod stage for Brainfuck function                                                                                 

Key                 BrainfuckFunction                                                                                                                              
Description         Brainfuck World Lambda Function ARN                                                                                                            
Value               arn:aws:lambda:us-east-1:085583328641:function:brainfuck-BrainfuckFunction-Hh5Y3dLqjbey                                                        

That means our application is deployed successfully!

Application testing

To test the application, we will use curl:

curl -X POST -H "Content-Type: application/json" -d '{"source":",[.,]","input":"hello"}' | jq

You should see something like this:

{
  "success": "hello"
}

The application will correctly handle errors and return the error message:

curl -X POST -H "Content-Type: application/json" -d '{"source":",[.,"}'  | jq
{
  "error": "Error parsing source: `Expected end of loop`"
}
curl -X POST -H "Content-Type: application/json" -d '{"INPUT":""}'  | jq
{
  "error": "failed to parse payload from application/json missing field `source` at line 1 column 12\n"
}

You can also access AWS Cloudwatch to see the generated logs.

START RequestId: b79d600c-3eb1-4623-9072-b19aefecb5b1 Version: $LATEST
END RequestId: b79d600c-3eb1-4623-9072-b19aefecb5b1
REPORT RequestId: b79d600c-3eb1-4623-9072-b19aefecb5b1	Duration: 1.40 ms	Billed Duration: 28 ms	Memory Size: 128 MB	Max Memory Used: 12 MB	Init Duration: 26.36 ms	
START RequestId: 23023dfd-31dd-4b28-bbe3-ad52ef805d29 Version: $LATEST
END RequestId: 23023dfd-31dd-4b28-bbe3-ad52ef805d29
REPORT RequestId: 23023dfd-31dd-4b28-bbe3-ad52ef805d29	Duration: 1.18 ms	Billed Duration: 2 ms	Memory Size: 128 MB	Max Memory Used: 12 MB	
START RequestId: 42f09c1b-b33c-4b45-8ab3-47489ec4e7ee Version: $LATEST
END RequestId: 42f09c1b-b33c-4b45-8ab3-47489ec4e7ee
REPORT RequestId: 42f09c1b-b33c-4b45-8ab3-47489ec4e7ee	Duration: 1.05 ms	Billed Duration: 2 ms	Memory Size: 128 MB	Max Memory Used: 13 MB	

By default, none of our own logs are printed, because the info and debug levels are filtered out. However, you can change this behavior without redeploying the code!

You should follow the steps:

  • Open AWS Lambda.
  • Open your Brainfuck function.
  • Go to configuration.
  • Open environment variables tab.
  • Click edit.
  • Enter RUST_LOG as a key and DEBUG as a value.
  • Save.

Now you should be able to see all the logs generated by our application.

[2022-03-17T11:33:54Z INFO  brainfuck_aws] Processing request!
[2022-03-17T11:33:54Z DEBUG brainfuck_aws] Body is valid. Processing request
[2022-03-17T11:33:54Z INFO  brainfuck_aws] Interpreter result: Success("hello")
[2022-03-17T11:33:54Z DEBUG hyper::client::pool] reuse idle connection for ("http",
[2022-03-17T11:33:54Z DEBUG hyper::proto::h1::io] flushed 283 bytes
[2022-03-17T11:33:54Z DEBUG hyper::proto::h1::io] parsed 3 headers
[2022-03-17T11:33:54Z DEBUG hyper::proto::h1::conn] incoming body is content-length (16 bytes)
[2022-03-17T11:33:54Z DEBUG hyper::proto::h1::conn] incoming body completed
[2022-03-17T11:33:54Z DEBUG hyper::client::pool] pooling idle connection for ("http",
[2022-03-17T11:33:54Z DEBUG hyper::client::pool] reuse idle connection for ("http",
[2022-03-17T11:33:54Z DEBUG hyper::proto::h1::io] flushed 109 bytes
END RequestId: 549acd63-0bf0-498c-896a-c18e82c9b53e
REPORT RequestId: 549acd63-0bf0-498c-896a-c18e82c9b53e	Duration: 1.62 ms	Billed Duration: 31 ms	Memory Size: 128 MB	Max Memory Used: 13 MB	Init Duration: 28.76 ms	
START RequestId: 2f28bf67-da9a-4a85-bd82-f901370e3fb0 Version: $LATEST
[2022-03-17T11:33:55Z DEBUG hyper::proto::h1::io] parsed 7 headers
[2022-03-17T11:33:55Z DEBUG hyper::proto::h1::conn] incoming body is chunked encoding


Conclusion

This article continues the Brainfuck interpreter series. We implemented CLI and AWS Lambda applications that reuse the brainfuck_interpreter library from the previous tutorial. It shows that Rust is flexible: the same code can be used anywhere!

You can deploy the AWS Lambda application easily with the help of SAM CLI, and you can configure its behavior by editing environment variables in your Lambda. Of course, you can deploy the application manually, but be aware that you need to compile it for the musl target because of the runtime.

You can view the final code in my public repo.