
Introducing Memphis Functions: A Faster, Easier, and Smarter Way to Do Event-Driven and Stream Processing

Yaniv Ben Hemo October 22, 2023 3 min read

Memphis Functions is the future of event-driven processing

The story

Organizations are increasingly adopting real-time event processing, intercepting data streams before they reach the data warehouse, and embracing event-driven architectural paradigms. However, they must contend with an ever-evolving landscape of data and technology. Development teams face the challenge of keeping pace with these changes while also striving for greater development efficiency and agility.

Further challenges lie ahead:

  1. Developing new stream processing flows is a formidable task.
  2. Code exhibits high coupling to particular flows or event types.
  3. There is no opportunity for code reuse or sharing.
  4. Debugging, troubleshooting, and rectifying issues pose ongoing challenges.
  5. Managing code evolution remains a persistent concern.

The shortcomings of current solutions are as follows:

  1. They impose the use of SQL or other vendor-specific, lock-in languages on developers.
  2. They lack support for custom logic.
  3. They add complexity to the infrastructure, particularly as operations scale.
  4. They do not facilitate code reusability or sharing.
  5. Ultimately, they demand a significant amount of time to construct a real-time application or pipeline.

Introducing Memphis Functions

The Memphis platform is composed of four independent components:

  1. Memphis Broker, serving as the storage layer.
  2. Schemaverse, responsible for schema management.
  3. Memphis Functions, designed for serverless stream processing.
  4. Memphis Connectors, facilitating data retrieval and delivery through pre-built connectors.

Memphis Functions empower developers and data engineers with the ability to seamlessly process, transform, and enrich incoming events in real-time through a serverless paradigm, all within the familiar AWS Lambda syntax.

This means they can achieve these operations without being burdened by boilerplate code, intricate orchestration, error-handling complexities, or the need to manage underlying infrastructure.

Memphis Functions provide this versatility across an array of programming languages, including but not limited to Go, Python, JavaScript, .NET, Java, and SQL. This flexibility ensures that development teams are free to select the language best suited to their specific needs, making the event processing experience more accessible and efficient.
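As a rough illustration, a processing function written in Python in the familiar Lambda handler style might look like the sketch below. The event shape (a batch of messages carrying a JSON string payload) and the field names are assumptions made for this example; the actual contract Memphis uses is defined in its documentation.

```python
import json

# Hypothetical Lambda-style handler. The exact event shape Memphis passes to a
# function is defined in its documentation; here we assume a batch of messages,
# each with a JSON string "payload", purely for illustration.
def handler(event, context):
    processed = []
    for message in event.get("messages", []):
        record = json.loads(message["payload"])
        # Enrich each event with a derived field before it continues downstream.
        record["amount_cents"] = int(round(record.get("amount_usd", 0) * 100))
        processed.append({"payload": json.dumps(record)})
    return {"messages": processed}
```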

What’s more?

In addition to orchestrating the functions themselves, Memphis Functions offer a comprehensive suite for their end-to-end management and observability. This suite encompasses a robust retry mechanism, dynamic auto-scaling built on both Kubernetes and established public cloud serverless technologies, extensive monitoring capabilities, dead-letter handling, efficient buffering, distributed security measures, and customizable notifications.

It’s important to note that Memphis Functions are designed to seamlessly complement existing streaming platforms, such as Kafka, without requiring adoption of the Memphis Broker. This flexibility allows organizations to leverage Memphis Functions while keeping their current infrastructure and preferences.

Getting started

Step 1: Write your processing function
Utilize the same syntax as you would when crafting a function for AWS Lambda, taking advantage of the familiar and powerful AWS Lambda framework. This approach lets you tap into AWS Lambda’s extensive ecosystem and development resources, making serverless function creation a seamless and efficient process without having to learn yet another framework’s syntax.

Functions can range from a simple string-to-JSON conversion all the way to pushing a webhook based on an event’s payload, as in the sketch below.
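For instance, a string-to-JSON conversion could look roughly like this, again in the Lambda handler style; the assumed message shape and field names are illustrative only, not the actual Memphis contract.

```python
import json

# Hypothetical string-to-JSON conversion in the AWS Lambda handler style.
# The input/output shape ("messages" carrying a comma-separated string
# "payload") is an assumption made for this sketch.
def handler(event, context):
    converted = []
    for message in event.get("messages", []):
        user_id, action, timestamp = message["payload"].split(",")
        converted.append({"payload": json.dumps(
            {"user_id": user_id, "action": action, "timestamp": timestamp}
        )})
    return {"messages": converted}
```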

Step 2: Connect Memphis to your git repository
Integrating Memphis with your git repository is the next crucial step. By doing so, Memphis establishes an automated link to your codebase, effortlessly fetching the functions you’ve developed. These functions are then conveniently showcased within the Memphis Dashboard, streamlining the entire process of managing and monitoring your serverless workflows. This seamless connection simplifies collaboration, version control, and overall visibility into your stream processing application development.
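A connected repository might be organized roughly as follows. This layout is purely hypothetical; the exact file and metadata conventions Memphis expects are described in its documentation.

```
functions-repo/
├── string-to-json/
│   ├── handler.py        # a Lambda-style handler like the sketches above
│   └── requirements.txt  # Python dependencies for this function
└── enrich-orders/
    ├── handler.py
    └── requirements.txt
```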

Step 3: Attach functions to streams
Now it’s time to integrate your functions with the streams. By attaching your developed functions to the streams, you establish a dynamic pathway for ingested events. These events will seamlessly traverse through the connected functions, undergoing processing as specified in your serverless workflow. This crucial step ensures that the events are handled efficiently, allowing you to unleash the full potential of your processing application with agility and scalability.
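Conceptually, attaching functions to a stream means every ingested event passes through the chain in order, along these lines (a purely illustrative sketch, not the Memphis API):

```python
import json

# Purely conceptual: attached functions are applied in order to every event
# ingested on the stream, and the result continues downstream.
def to_json(payload: str) -> str:
    user_id, action, timestamp = payload.split(",")
    return json.dumps({"user_id": user_id, "action": action, "timestamp": timestamp})

def add_processed_flag(payload: str) -> str:
    record = json.loads(payload)
    record["processed"] = True
    return json.dumps(record)

attached_functions = [to_json, add_processed_flag]

def process_stream(events):
    for payload in events:
        for fn in attached_functions:
            payload = fn(payload)
        yield payload  # the processed event continues to consumers downstream

for result in process_stream(["42,login,2023-10-22T10:00:00Z"]):
    print(result)
```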

Gain early access and sign up for our Functions Private Beta waiting list here!