Chris Padilla/Blog

My passion project! Posts spanning music, art, software, books, and more
You can follow by RSS! (What's RSS?) Full archive here.

    New Album — SkyBox 🫧🐠

    🌎

    Happy holidays!! New music is out today!

    ⋆.˚🐠‧₊ Frutiger Aero inspired music! ˚✩ ₊˚🫧⊹♡

    Late 2000s tech had a unique vibrancy that made it feel almost magical. A new wave of color and light, glowing portals that illuminated worlds imagined and real. It felt truly alive; the way you interacted with it was so novel. More intuitive than ever before.

    The anticipation for this promised future was at its strongest as you began to load up that new system you brought home. Powering it on. Walking through the startup process. Being greeted by something wholly new...

    Details here!

    Thanks for listening! May it feel just like opening up a Nintendo Wii for the holidays.


    Derek Sivers and Websites

    A surprisingly technopositive view from Derek Sivers on AI learning from your own public content:

    Come and get me. I want my words to improve your future decisions.

    I'm trying to thoroughly write my thought processes, values, and worldview, so that you can remember it, re-create it and improve upon it. (Remember me. Re-create me. Improve upon me.)

    Another case for how a website can become a life's work. And how that life's work can have impact, even if it's obfuscated. Perhaps that's the ego-less way of viewing it, assuming your words and images will be taken out of context and converted to a single data point in a larger model.

    Though, a site unto itself is still more inspiring since it's a portrait of a life. We're wired for that. There's more meaning to derive from an individual than a summarized report.

    So I think I'll keep at it.


    Getting Started with LangGraph

    LangChain is emerging as a popular choice for building AI applications. However, when you need a higher degree of control and flexibility in a project, LangGraph offers exactly that, all the while still providing guardrails and tooling for quick iteration and development.

    Below, I'll share the absolute essentials needed to get started with LangGraph! With this toy app, a joke-telling AI, we'll cover all the major concepts for developing a graph. While it's a simple app, it should give you a foundation for developing your own RAG applications.

    Setting Annotations

    LangGraph is really a state machine at the end of the day. To get started, you'll want to define the state that will persist and change throughout your graph. These definitions are referred to as Annotations in LangGraph.

    Below, I'm creating an Annotation with two pieces of state: messages and selectedModel. I want to be able to add and keep track of messages. Additionally, I want to be able to select which model to invoke.

    import {Annotation} from "@langchain/langgraph";
    import {BaseMessage, HumanMessage, SystemMessage} from "@langchain/core/messages";
    
    export const GraphAnnotation = Annotation.Root({
        messages: Annotation<BaseMessage[]>({
            reducer: (current, update) => current.concat(update as BaseMessage[]),
            default: () => [],
        }),
        selectedModel: Annotation<string>({
            reducer: (current, update) => update,
            default: () => "",
        }),
    });
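    Worth pausing on the reducers: they decide how each piece of state is merged whenever a Node returns an update. Stripped of LangGraph entirely, the two reducer shapes above behave like this (a plain-TypeScript sketch, with ordinary strings standing in for message objects):

```typescript
// messages-style reducer: appends updates to the running list.
const appendReducer = <T>(current: T[], update: T[]): T[] => current.concat(update);

// selectedModel-style reducer: the latest update simply wins.
const overwriteReducer = (_current: string, update: string): string => update;

let messages: string[] = [];                    // default: () => []
messages = appendReducer(messages, ["Tell me a joke!"]);
messages = appendReducer(messages, ["Why did the chicken cross the road?"]);

let selectedModel = "";                         // default: () => ""
selectedModel = overwriteReducer(selectedModel, "OpenAI");

console.log(messages.length);  // 2 -- both messages kept
console.log(selectedModel);    // "OpenAI"
```

    This is why Nodes only return the new messages they produce: the reducer takes care of accumulating them into the full history.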

    Defining the Workflow

    Once you have defined your Annotation, you can then outline the flow of your graph. Graphs are composed of two elements: Nodes and Edges. A Node is a function that will run. An Edge is the direction taken following a Node's completion.

    Additionally, we can define Conditional Edges. These are steps in the graph that will assess which Node to access next.

    Before getting into the details, let's outline a simple graph:

    import {StateGraph} from "@langchain/langgraph";
    
    const workflow = new StateGraph(GraphAnnotation)
        .addNode("OpenAI", callOpenAI)
        .addNode("Anthropic", callAnthropic)
        .addConditionalEdges("__start__", selectModel)
        .addEdge("OpenAI", "__end__")
        .addEdge("Anthropic", "__end__");

    My graph here defines two Nodes, each invoking a third-party LLM. Below that, I'm defining a Conditional Edge. And below that, I'm adding simple Edges to the end of the application.

    Creating the Nodes

    Nodes are simply functions that are called. Their expected output is the state we want to change in the graph. For example, when calling a model, I want the AI response to be added to my array of messages. Here's what both of those Nodes will look like:

    import {ChatOpenAI} from "@langchain/openai";
    import {ChatAnthropic} from "@langchain/anthropic";
    
    const callOpenAI = async (state: typeof GraphAnnotation.State) => {
        const model = new ChatOpenAI({temperature: 0});
    
        // Prepend the system prompt without mutating graph state
        const messages = [new SystemMessage(prompt), ...state.messages];
        const response = await model.invoke(messages);
    
        return {messages: [response]};
    };
    
    const callAnthropic = async (state: typeof GraphAnnotation.State) => {
        const model = new ChatAnthropic({temperature: 0});
    
        // Prepend the system prompt without mutating graph state
        const messages = [new SystemMessage(prompt), ...state.messages];
        const response = await model.invoke(messages);
    
        return {messages: [response]};
    };

    Notice that I'm adding a SystemMessage before invoking each model. This is where I can provide my prompt:

    const prompt = "You are a hilarious comedian! When prompted, tell a joke.";

    Routing With the Conditional Edge

    Earlier, we defined a selectedModel state in our Annotation. In our Conditional Edge, we'll make use of it to route to the preferred model:

    const selectModel = async (state: {selectedModel: string}) => {
        if (state.selectedModel === "OpenAI") {
            return "OpenAI";
        }
    
        return "Anthropic";
    };
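    Note the fallback here: anything other than "OpenAI", including an empty string, lands on Anthropic. The same logic in isolation (a synchronous copy, just for illustration):

```typescript
const routeModel = (state: {selectedModel: string}): string =>
    state.selectedModel === "OpenAI" ? "OpenAI" : "Anthropic";

console.log(routeModel({selectedModel: "OpenAI"}));  // "OpenAI"
console.log(routeModel({selectedModel: ""}));        // "Anthropic" -- the fallback
```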

    Note that I'm returning the name of the Node that I'd like the graph to traverse to next.

    Persistence

    Persistence is a larger topic in LangGraph. For today, we'll make use of the in-memory saver. Know that you can swap in checkpointers backed by SQL databases, MongoDB, Redis, or any custom solution:

    import {MemorySaver} from "@langchain/langgraph";
    
    const checkpointer = new MemorySaver();
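    If you want checkpoints to survive a restart, swapping the saver is the only change needed. As one example (an assumption on my part — package names and APIs move quickly, so check the current LangGraph docs), a SQLite-backed saver is published as a separate package:

```typescript
import {SqliteSaver} from "@langchain/langgraph-checkpoint-sqlite";

// Checkpoints live in a local SQLite file instead of process memory.
const checkpointer = SqliteSaver.fromConnString("checkpoints.db");
```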

    Calling the Graph

    With all of this set, we're ready to use the graph!

    Below, I'll compile the graph with the checkpointer I created above. Once I've done that, I'll create a config object (the thread_id is a unique identifier for a conversation between the user and the graph; it's hardcoded here for simplicity). With both of these, I'll invoke the graph, passing the initial state as well as my config object.

    import {RunnableConfig} from "@langchain/core/runnables";
    
    
    export const compiledGraph = workflow.compile({checkpointer});
    
    const runGraph = async () => {
        const config = {configurable: {thread_id: "123"}} as RunnableConfig;
        const {messages} = await compiledGraph.invoke(
                // Initial updates to State
                {selectedModel: "OpenAI", messages: [new HumanMessage("Tell me a joke!")]},
                // RunnableConfig
                config,
        );
        console.log(messages[messages.length - 1].content);
    };
    
    runGraph();
    
    // Logs the following:
    // Why couldn't the bicycle stand up by itself?
    //
    // Because it was two tired!
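    Since the graph was compiled with a checkpointer, invoking it again with the same thread_id resumes the saved conversation rather than starting fresh: the prior messages are loaded from the checkpoint, and the new HumanMessage is appended by the reducer. A sketch, reusing the hardcoded thread from above:

```typescript
const followUp = async () => {
    // Same thread_id as before, so state.messages picks up where we left off.
    const config = {configurable: {thread_id: "123"}} as RunnableConfig;
    const {messages} = await compiledGraph.invoke(
        {messages: [new HumanMessage("Tell me another one!")]},
        config,
    );
    // messages now contains both prompts and both jokes.
    console.log(messages[messages.length - 1].content);
};
```

    Note that selectedModel is also restored from the checkpoint, so this follow-up still routes to OpenAI without passing it again.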

    There you have it! With that, you're off and away on developing with AI! 🚴


    A Little Tear

    Listen on Youtube

    The Sarah Vaughan recording of this is absolute magic, my goodness.


    Snow Trek Home

    ☃️🏠🏔️

    โ„๏ธ