What Is a Graph in Oracle AI Database 26ai?

If you have ever asked, “How is this thing connected to that thing?”, you have already asked a graph question.

A customer sends money to an account. That account sends money to two other accounts. One of those accounts sends money back to the original customer. In ordinary SQL, you can query each transfer. But when the real question is about chains, loops, hubs, and hidden intermediaries, the question itself is graph-shaped.

Oracle AI Database 26ai makes that graph shape part of the database instead of a separate side project. You can model a property graph over existing relational tables, query it with SQL graph syntax, join the results back to relational data, and then move into Graph Studio or PGX when you need visualization or heavier analytics.

Did you know that graphs were added to the SQL standard? Read about it here.

This series starts from zero. You do not need graph theory. You do not need a separate graph database. You need a basic comfort with SQL and a practical problem where relationships matter.

There are two ways to try the runnable examples in this series. The first is FreeSQL, which works just fine for most of the examples: creating the tables, creating the graph, and querying patterns with GRAPH_TABLE. The second is your own Autonomous Database Serverless instance on OCI, which you will need for the full analytics chapter, because the DBMS_OGA algorithm examples require packages that are not available in the FreeSQL environment.

By the end of this article, you should be able to explain five ideas well enough to understand the graph DDL in the next article: vertex, edge, label, property, and directed relationship. You should also understand why Oracle’s 26ai graph approach is accessible from ordinary SQL instead of starting in a separate graph-only tool.

That is the only conceptual load for now. We will save weighted paths, algorithms, PGX, and natural-language querying for later articles, when those ideas have a working graph underneath them.

The Graph Mental Model

A graph has two main parts: vertices and edges.

A vertex is a thing. In a banking example, a vertex might be an account. In a social network, it might be a person. In a supply chain, it might be a warehouse, supplier, shipment, or part.

An edge is a relationship between things. In the banking example, a transfer from account 101 to account 102 is an edge. The edge has direction because money moved from one account to another.

Both vertices and edges can have properties. An account vertex can have an account id, customer id, account name, and balance. A transfer edge can have an amount, timestamp, channel, and note.

Labels give those things names in the graph model. In this series, account vertices use the label account, and transfer edges use the label transfer. Labels make graph patterns readable because you can ask for accounts connected by transfers instead of thinking only in table and column names.

That gives us a simple model:

  • account is a vertex;
  • transfer is an edge;
  • transfer direction goes from source account to destination account;
  • transfer amount and timestamp are edge properties;
  • customer id and balance are vertex properties.

The point is not to replace tables. The point is to describe relationships over data you already have.

That distinction helps avoid a common beginner mistake. You are not deciding whether the account data is “relational” or “graph.” It is relational data that also has a graph interpretation. Oracle lets you keep both views of the same facts.

Why Graph Questions Feel Different

Relational databases are excellent at storing and querying structured facts. A transfer table can tell you that account 101 sent 500 dollars to account 102. A customer table can tell you that account 102 belongs to a medium-risk customer.

Graph questions start when the relationship pattern matters.

For example:

  • Which accounts receive money from many different sources?
  • Which accounts sit in the middle of two-hop transfer chains?
  • Which accounts participate in round-trip transfer cycles?
  • Which high-risk customers are connected to suspicious paths?

You can answer some of these with ordinary joins, especially when the path length is fixed and short. But the queries become harder to read as soon as you want to ask about paths, cycles, or repeated relationship patterns.

That is where GRAPH_TABLE becomes useful. It lets you describe a graph pattern and return the matches as rows. Once the graph match is back in row form, you can use normal SQL again: filter it, aggregate it, join it, and sort it.
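
To make that concrete, here is the kind of query GRAPH_TABLE enables. This is a minimal sketch that assumes the BANK_GRAPH graph and the account and transfer labels we define later in the series, so treat it as a preview rather than something to run yet:

SELECT src, dst, amount
FROM GRAPH_TABLE ( bank_graph
  MATCH (a IS account) -[t IS transfer]-> (b IS account)
  WHERE t.amount > 1000
  COLUMNS ( a.account_id AS src,
            b.account_id AS dst,
            t.amount     AS amount )
);

The MATCH clause describes the pattern, and the COLUMNS clause decides which vertex and edge properties come back as ordinary columns you can filter, join, and aggregate.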

What Changed In 26ai

In Oracle AI Database 26ai, SQL property graphs are native database objects. You create them with CREATE PROPERTY GRAPH, and you query them with SQL graph syntax such as GRAPH_TABLE.

The important beginner idea is this: a SQL property graph is metadata over database objects. You do not have to copy all of your rows into a separate graph store just to start asking graph questions. The graph definition says which tables provide vertices, which tables provide edges, how the keys connect, which labels to use, and which properties to expose.

For the bank example in this series, the relational data is still stored in ordinary tables:

  • customers
  • bank_accounts
  • bank_transfers

The graph object simply gives those tables graph meaning:

  • bank_accounts becomes account vertices;
  • bank_transfers becomes transfer edges;
  • src_account_id and dst_account_id define edge direction.

That is why graph in 26ai is a good fit for developers and DBAs who already work with Oracle Database. You can start with SQL, keep the data where it is, and add graph-shaped queries where they help.
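
As a preview, here is a minimal sketch of what that graph DDL can look like, based on the table and column names above. The real BANK_GRAPH definition arrives in the next article, so treat this as illustration rather than the final DDL:

CREATE PROPERTY GRAPH bank_graph
  VERTEX TABLES (
    bank_accounts
      KEY (account_id)
      LABEL account
      PROPERTIES (account_id, customer_id, account_name, balance)
  )
  EDGE TABLES (
    bank_transfers
      KEY (transfer_id)
      SOURCE KEY (src_account_id) REFERENCES bank_accounts (account_id)
      DESTINATION KEY (dst_account_id) REFERENCES bank_accounts (account_id)
      LABEL transfer
      PROPERTIES (amount, transfer_ts, channel)
  );

Nothing is copied anywhere. This statement only records the mapping from tables to vertices and edges.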

Why Not RDF?

Oracle supports RDF graphs too, but this series is about property graphs. RDF is the better fit when the work centers on formal semantics, ontologies, inferencing, and standards-based knowledge representation. Property graphs are the better starting point here because the question is operational and concrete: how are these accounts connected, and what suspicious patterns do those connections form?

The Bank Fraud Story We Will Use

The rest of this series uses a small bank-fraud style example. It is intentionally tiny so you can understand every row.

The demo has customers, accounts, and transfers. Some transfers form a simple cycle. Other transfers create a fan-in pattern where several accounts send money to the same destination. Another chain places a high-risk customer in the middle.

That gives us enough data to teach useful graph ideas without hiding the lesson inside a huge dataset.

Here is the shape of the data we will use throughout this series:

customers          bank_accounts         bank_transfers
----------         -------------         --------------
customer_id    ->  customer_id           transfer_id
risk_tier          account_id        ->  src_account_id
                   account_name      ->  dst_account_id
                   balance               amount
                                         transfer_ts
                                         channel

The SQL property graph gives those tables this connected shape:

(account)-[transfer]->(account)

That one pattern, account connected to account by transfer, is enough to teach all four articles’ worth of graph concepts: paths, cycles, hubs, chains, ranking, connected groups, and hybrid SQL joins.
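
For example, a two-hop transfer chain is just the same pattern repeated. A sketch, again assuming the graph we build in the next article:

SELECT first_acct, middle_acct, last_acct
FROM GRAPH_TABLE ( bank_graph
  MATCH (a IS account) -[IS transfer]-> (m IS account) -[IS transfer]-> (b IS account)
  COLUMNS ( a.account_id AS first_acct,
            m.account_id AS middle_acct,
            b.account_id AS last_acct )
);

An account that shows up as m in many such matches is exactly the kind of intermediary the fraud articles will look for.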

Where This Series Is Going

The next article builds the graph. We will create the tables, load seed data, define BANK_GRAPH, and inspect the graph metadata.

After that, we will query fraud patterns with GRAPH_TABLE. Then, in ADB-S, we will add in-database algorithms with DBMS_OGA and use Graph Studio to visualize the same graph.

The goal is not to memorize every graph feature. The goal is to build a practical mental model:

  1. start with relational data;
  2. define a SQL property graph;
  3. query graph patterns with SQL;
  4. join graph results to ordinary data;
  5. move into Graph Studio or PGX when you need visualization or deeper analytics.

That is the zero-to-hero path.

Try The Demo

The fastest way to make the ideas concrete is to run the tiny bank demo yourself. The first runnable step creates three ordinary relational tables:

  • customers
  • bank_accounts
  • bank_transfers

Those tables hold the facts. The next step creates BANK_GRAPH, a SQL property graph over the account and transfer rows. Nothing is copied into a separate graph store; the graph definition gives the existing rows a connected shape.

Note: this will ask you to log in with your oracle.com account since it writes to the database, not just reads. It’s totally free. I recommend you read through the SQL first to understand what it does, then click the “Run Script” button to execute it, and then scroll through the output to see what happened. Feel free to play around with it and change it however you like!

In the next article, you’ll load the seed data and create the graph. If you prefer to use your own Autonomous Database Serverless instance, copy the same SQL into your SQL Worksheet and run it there. Either way, keep the data small at first. The goal is to see the graph model clearly before adding larger datasets, algorithms, or visualization.

Once BANK_GRAPH exists, the later examples will use the same graph to find inbound hubs, transfer chains, cycles, PageRank scores, connected groups, and weighted paths.

Build a stateless RAG chatbot with Spring AI and Oracle (part 1 of 4)

Hi everyone!

I just posted the first video in this series about building AI assistants with memory and agency. In this video, we built a simple customer support assistant using Spring Boot, Spring AI, and Oracle Database. It answers policy questions, but the key idea is that those answers are grounded in data stored in Oracle rather than coming from the model itself.

In this post I want to walk through how the application is actually put together, focusing on the pieces that make that retrieval flow work.

The full project is here: https://github.com/markxnelson/shopassist

If you clone the repo and open it up, it looks like a normal Spring Boot application. The difference is how Oracle is used underneath, and how that gets wired into the chat flow.

A good place to start is with how the database is brought up locally.

In compose.yaml, the application relies on Spring Boot’s Docker Compose support to start Oracle automatically:

services:
  oracle:
    image: gvenzl/oracle-free:23.26.1-slim
    ports:
      - "1521"
    environment:
      ORACLE_PASSWORD: oracle
      APP_USER: shopassist
      APP_USER_PASSWORD: shopassist

There’s nothing unusual here, but it’s worth noting that this is just a standard Oracle instance. There’s no separate “vector database” running anywhere. The same database is going to store both relational data and vector embeddings.

Once that container is running, the next piece is how the application connects to it.

In application.yml, you can see both the datasource configuration and the vector store configuration side by side:

spring:
  datasource:
    url: jdbc:oracle:thin:@localhost:${oracle.port}/FREEPDB1
    username: shopassist
    password: shopassist
  ai:
    vectorstore:
      oracle:
        initialize-schema: true
        index-type: NONE
        distance-type: COSINE

This is where Oracle AI Vector Search becomes visible in the application.

The initialize-schema flag tells Spring AI to create the underlying tables needed to store embeddings. Once that happens, the database is no longer just storing rows – it’s storing vectors that can be searched using similarity.

The distance-type: COSINE setting controls how similarity is calculated, which is what allows us to retrieve “related” documents even when the wording is different.

With that in place, the next question is how data actually gets into the vector store.

That happens in DataSeeder.java, and this is one of the more important pieces of the application because it defines what the system actually knows.

If you look at the seeder, you’ll see the policy documents being added like this:

vectorStore.add(List.of(
    new Document("""
        POL-RETURN-01:
        Returns are accepted within 30 days of purchase with a valid receipt.
        Refunds are issued to the original payment method.
        """),
    new Document("""
        POL-DAMAGE-01:
        Damaged items must be reported within 48 hours of delivery.
        Customers may request a replacement or refund.
        """),
    new Document("""
        POL-SHIPPING-01:
        Shipping delays may occur due to weather or carrier issues.
        Customers will be notified of significant delays.
        """),
    new Document("""
        POL-REFUND-01:
        Refunds are typically processed within 5-7 business days after approval.
        Processing times may vary by payment provider.
        """)
));

Each of these documents is embedded and stored in Oracle. That’s what allows us to later take a user’s question, convert it into a vector, and find the most similar documents using Oracle AI Vector Search.

The same class also seeds some relational order data, which we expose through /api/v1/orders. That isn’t used by the chatbot yet, but it becomes important later when we start combining retrieval with structured queries.

Once the data is in place, the next piece is how a request flows through the system.

If you open AgentController.java, the endpoint itself is intentionally simple:

@RestController
@RequestMapping("/api/v1/agent")
public class AgentController {

    private final ChatClient chatClient;

    public AgentController(ChatClient chatClient) {
        this.chatClient = chatClient;
    }

    @PostMapping("/chat")
    public String chat(@RequestBody ChatRequest request) {
        return chatClient.prompt(request.message()).call().content();
    }
}

On its own, this just looks like a thin wrapper around an LLM call. There’s no obvious retrieval happening here.

That’s because the retrieval is introduced through how the ChatClient is configured:

QuestionAnswerAdvisor.builder(vectorStore)
    .searchRequest(SearchRequest.builder().topK(3).build())
    .build();

When chatClient.prompt(...).call() is invoked, this advisor intercepts the request, runs a similarity search against Oracle, and retrieves the top three matching documents.

Those documents are then injected into the prompt that gets sent to the model.

So when a user asks something like “How long do refunds take?”, the system first retrieves the refund policy from Oracle and then asks the model to generate an answer based on that context.
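
For completeness, here is a hedged sketch of how that advisor can be attached to the ChatClient. The exact wiring in the repository may differ; the point is that the advisor is registered once and then applies to every prompt(...) call:

@Configuration
public class ChatClientConfig {

    // Sketch only: the bean shape and names here are assumptions,
    // not a copy of the repository's configuration.
    @Bean
    ChatClient chatClient(ChatClient.Builder builder, VectorStore vectorStore) {
        return builder
                .defaultAdvisors(
                        QuestionAnswerAdvisor.builder(vectorStore)
                                .searchRequest(SearchRequest.builder().topK(3).build())
                                .build())
                .build();
    }
}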

One small addition that makes this much easier to see is the debug endpoint. Instead of going through the full chat flow, this endpoint lets you query the vector store directly and inspect what Oracle returns.

The controller for that looks like this:

@RestController
@RequestMapping("/api/v1/debug/policies")
public class PolicyDebugController {
    private final VectorStore vectorStore;
    public PolicyDebugController(VectorStore vectorStore) {
        this.vectorStore = vectorStore;
    }
    @GetMapping("/search")
    public List<Document> search(@RequestParam String q) {
        return vectorStore.similaritySearch(
            SearchRequest.builder()
                .query(q)
                .topK(3)
                .build()
        );
    }
}

There’s nothing complicated here, but it’s doing something important. Instead of calling the model, it calls vectorStore.similaritySearch(...) directly using the same search configuration.

So when you hit:

GET /api/v1/debug/policies/search?q=refund

you’re seeing exactly what Oracle returns for that query. The same embeddings, the same cosine similarity, the same top-K logic—just without the LLM step.

This makes it much easier to reason about the system. If the wrong documents show up here, the model never had a chance to produce the right answer. If the right documents show up here but the answer is still off, then you know to look at prompt construction instead.

At this point, the application is doing something useful, but it’s also very clearly limited because it is completely stateless.

If you try something like:

My name is Maya.

and then follow it with:

What’s my name?

the system won’t be able to answer. Each request is handled independently, and the only context available is the current message plus whatever documents were retrieved from Oracle.

There is no conversation history, and nothing is stored between requests.

This is the key distinction to understand at this stage. Oracle AI Vector Search gives us a way to retrieve relevant knowledge from stored data, but it does not give us memory. The system can answer policy questions because those policies are embedded and searchable, but it cannot remember something you just told it.

That limitation is intentional, because it sets up the next step. In the next episode we’ll introduce persistent chat memory, add conversation identifiers, and start maintaining context across requests.

For now, this version gives us a clean, grounded foundation: a Spring Boot service where Oracle stores embeddings, performs similarity search, and provides the context that the model uses to generate responses. Everything else we add will build on top of that.

Exploring Oracle’s new AI Agent Memory Python Library with OpenAI

Hi everyone!

In this post, I want to show you a small but useful demo application that uses Oracle AI Agent Memory from Python. The complete code for this example is in the agent-memory repository. You can learn more about Oracle AI Agent Memory on the Oracle website.

The demo is a customer support assistant. That is a nice shape for an agent memory example because it gives us all the things agents usually need to remember: who the user is, what happened before, which device or account is involved, what the current case state is, and whether this new problem sounds like an earlier one.

The important point is that oracleagentmemory is the memory layer. It is not tied to one agent framework. You can use it with different frameworks and SDKs. This particular sample uses the OpenAI SDK for the agent-style tool-calling loop, and it uses Oracle AI Agent Memory as the durable memory backend.

In other words, OpenAI drives the agent turn. Oracle stores and retrieves the memory.

Let’s walk through it.

What we are building

The sample application does five things:

  • starts a local Oracle Database Free container
  • creates the Oracle AI Agent Memory managed schema
  • creates a small companion schema for customer, device, case, policy, JSON state, and graph data
  • runs a scripted support conversation with memory-aware tool calls
  • prints a database inspection report so we can see where the memory went

The scenario centers on Alex, a support user with a River House account and a Model X router. Alex had a prior Wi-Fi dropout issue. Later, Alex comes back and says video calls are unstable again. The agent needs to figure out whether that sounds related, what facts it already knows, what relationships matter, and what to do next.

That gives us a realistic demo without needing a real ticketing system, CRM, or router telemetry feed.

Before you begin

You need Python, Docker, uv, and an OpenAI API key.

The quickstart from the repository is:

cp .env.example .env
# Edit .env and set OPENAI_API_KEY.
uv sync
uv run agent-memory-demo run

The demo uses gvenzl/oracle-free:23.26.1-slim-faststart by default. That is helpful for a local demo because the database starts faster than a normal first-start image.

The repository also sets these defaults:

  • OPENAI_MODEL=gpt-5-mini
  • OPENAI_EMBEDDING_MODEL=text-embedding-3-small
  • OPENAI_MEMORY_LLM_MODEL=gpt-5-mini
  • ORACLE_MEMORY_TABLE_PREFIX=OAM_DEMO_
  • ORACLE_APP_TABLE_PREFIX=OAM_DEMO_APP_

The two prefixes matter. The OAM_DEMO_ tables are managed by Oracle AI Agent Memory. The OAM_DEMO_APP_ tables are the companion business tables created by this sample application.

That separation makes the demo easier to understand. We can see what the library owns, and we can also see the normal application data that the agent works with.

Why agent memory is not just one thing

When people first talk about agent memory, they often mean one thing: chat history. That is useful, but it is not enough.

A useful agent may need several kinds of memory:

  • Working or thread memory: the current and previous support messages. Stored as relational rows in managed Agent Memory tables. Use this when the agent needs conversation continuity.
  • Durable fact memory: preferences, facts, and case summaries that should survive the current conversation. Stored as managed Agent Memory records, plus vector chunks for retrieval. Use this when the agent should remember something later.
  • Profile memory: user and agent profiles. Stored as relational rows. Use this for stable actor information such as user preferences or agent identity.
  • State memory: the mutable status of a support case. Stored as JSON in the app-owned case table. Use this when the shape of the state may evolve over time.
  • Relationship memory: user to account to device to case to policy paths. Stored as a SQL Property Graph over relational vertex and edge tables. Use this when the important question is about connected things.
  • Similarity memory: prior cases or memories that are semantically close to the current issue. Stored in Oracle VECTOR columns in record chunks. Use this when the same thing may be described in different words.

That is the main architectural idea in the demo. Different memory types have different access patterns, so they should not all be forced into the same shape.

Relational data is great when identifiers, constraints, ownership, and joins matter. JSON is great when the shape of state changes as the case moves forward. Graph is great when paths and relationships are the point. Vector data is great when similarity matters more than exact matching.

The nice thing here is that all of those can live in Oracle Database. The agent does not need a separate relational database, graph database, document database, and vector database just to remember one support case.

The repository structure

The application code lives under src/agent_memory_demo.

The key files are:

  • cli.py: the Typer commands run, interactive, inspect-db, verify-memory, and reset-db.
  • container.py: the local Oracle container lifecycle.
  • config.py: environment variable loading and defaults.
  • memory.py: creation of the OracleAgentMemory client.
  • agent.py: the OpenAI tool-calling loop and tool schemas.
  • tools.py: tool handlers for memory search, saving memory, context, JSON state, graph paths, and inspection.
  • schema.py: the app-owned relational, JSON, and graph schema.
  • seed.py: deterministic demo data.
  • inspect.py: database inspection output, including vector storage evidence.

The sample is intentionally small, but it is not a toy in the sense of hiding the database. It shows the database because that is the point of the demo.

Creating the memory client

The memory setup happens in memory.py.

The demo creates an OracleAgentMemory client with:

  • an Oracle database connection
  • an embedding model
  • an LLM model for memory extraction and summaries
  • a schema policy
  • a table name prefix

The schema policy is important. Normal startup uses SchemaPolicy.CREATE_IF_NECESSARY, so the managed Agent Memory schema is created if it is not already there. The reset-db command uses the recreate policy for an explicit destructive reset.

That gives the sample a clean local developer workflow. You can run it, inspect it, reset it, and run it again without needing a manually installed database.
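
As a rough sketch, the construction looks something like the following. The parameter names here are illustrative assumptions based on the list above, not the library's documented signature, so check memory.py in the repository for the real call:

# Illustrative sketch only: parameter names are assumptions based on the
# description above, not the documented oracleagentmemory signature.
import oracledb
from oracleagentmemory import OracleAgentMemory, SchemaPolicy

connection = oracledb.connect(
    user="agent_memory_demo",
    password="...",  # comes from the demo environment
    dsn="localhost:32838/FREEPDB1",
)

memory = OracleAgentMemory(
    connection=connection,
    embedding_model="text-embedding-3-small",  # OPENAI_EMBEDDING_MODEL default
    llm_model="gpt-5-mini",                    # OPENAI_MEMORY_LLM_MODEL default
    schema_policy=SchemaPolicy.CREATE_IF_NECESSARY,
    table_prefix="OAM_DEMO_",                  # ORACLE_MEMORY_TABLE_PREFIX default
)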

The agent loop

The OpenAI side of the sample lives in agent.py.

The agent loop sends a user message, instructions, and a list of function tools to the OpenAI SDK. When the model returns tool calls, the application executes the local Python handler, sends the tool output back, and repeats until the model produces final text.

The useful part is that the tools map to different memory operations:

  • search_memory: scoped Agent Memory search.
  • save_memory: explicit durable memory writes.
  • get_context: a thread context card.
  • update_case_state: JSON state updates in the support case table.
  • find_related_case: vector-backed semantic retrieval of similar case memories.
  • explain_relationships: SQL Property Graph traversal.
  • inspect_memory_tables: database evidence for the demo.

This is a useful pattern for agent applications. The model does not get direct database access. It gets tools. Each tool has a focused job, a scoped input shape, and a handler that decides what database operation is safe and appropriate.
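
As an illustration, a tool like find_related_case might be described to the model with a schema along these lines. This follows the standard OpenAI function-tool shape, but the fields for this particular tool are assumptions; the real schemas live in agent.py:

# Illustrative tool schema; the actual definition in agent.py may differ.
FIND_RELATED_CASE_TOOL = {
    "type": "function",
    "name": "find_related_case",
    "description": "Find prior support cases semantically similar to the current issue.",
    "parameters": {
        "type": "object",
        "properties": {
            "user_id": {"type": "string"},
            "issue_description": {"type": "string"},
        },
        "required": ["user_id", "issue_description"],
    },
}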

The companion schema

The sample creates app-owned tables for customers, devices, support cases, policies, graph vertices, and graph edges.

This is separate from the managed Agent Memory schema. That is a good design choice because most real applications already have business data. Agent memory should not replace that data. It should work with it.

The support case table uses JSON for mutable state. A case might start as open, then get a next action, then become escalation-ready, then later get a resolution. That kind of state is structured, but it can change over time. JSON is a good fit.

The graph tables show relationships. Alex owns an account. The account has a router. The router has a case. The case uses a policy. That is exactly the kind of question where graph traversal is easier to read than a pile of joins.

The managed Agent Memory tables store threads, messages, memory records, actor profiles, and record chunks. The record chunks are where the vector-backed similarity story becomes visible.

Run the scripted demo

Start with the main command:

uv run agent-memory-demo run

The run command starts the Oracle container, creates demo schema objects, seeds deterministic data, runs a scripted support conversation, shows memory tool usage, prints the final assistant answer, and includes a database inspection report before the container is removed.

There are a few things to watch for in the output.

First, the demo creates both user and agent profiles. That shows profile memory, not just chat memory.

Second, it creates an initial thread and stores messages. That gives the agent working memory and a durable record of the conversation.

Third, it saves explicit memory. The demo records facts like Alex’s router and contact preferences.

Fourth, it creates a second thread for a follow-up problem and shows the difference between broad thread matching and exact thread matching. That is a subtle but important behavior. Sometimes you want memories from the same user and agent across threads. Sometimes you only want the current thread.

Fifth, it shows scope isolation. A search scoped to another user should not see Alex’s memories.

Finally, the OpenAI tool calls are printed with their JSON arguments and compact results. That makes the agent loop much easier to reason about because you can see what the model asked for and what the database returned.

The output is color-coded:

  • green for user messages
  • magenta for assistant messages
  • yellow for OpenAI tool calls and arguments
  • blue for Oracle database, graph, and tool-result evidence
  • cyan for progress and memory setup or search visibility

That may sound like a small thing, but it makes the demo much easier to follow while it runs.

Run the interactive demo

The repository also includes an interactive mode:

uv run agent-memory-demo interactive

This starts a memory-enabled assistant using the same command-scoped Oracle container lifecycle. It seeds the same companion data and stores your turns in a scoped Agent Memory thread.

One practical detail: the container is command-scoped. It exists while the command is running and is removed when the command exits. So if you want to inspect the database manually, leave the interactive session open.

The README says to wait for output like this:

Started Oracle demo database at localhost:32838/FREEPDB1
Seeded companion relational, JSON, graph, and policy data.
Interactive Oracle AI Agent Memory demo. Type 'quit' to exit.
you>:

Then, before inspecting the database, ask a prompt that creates some memory activity:

For user_id=user_alex and agent_id=support_agent, inspect memory tables and tell me what you can see in one sentence.

Now keep that terminal open and inspect the database from another terminal.

Inspect the database

There is a command for a quick database evidence report:

uv run agent-memory-demo inspect-db

This starts a temporary Oracle container, seeds the deterministic companion data, prints table counts, JSON case state, and graph paths, and then tears the container down.

One thing to know: inspect-db does not create Agent Memory records. That means managed memory table counts are expected to be zero for that command. Use run or verify-memory when you want to populate and inspect memory and vector chunk tables.

For hands-on SQL inspection, keep the interactive session open and connect from a second terminal. Replace the port with the one printed by your run:

sql agent_memory_demo/AgentMemoryDemo1@localhost:32838/FREEPDB1

A good first query is to list the demo tables:

SELECT table_name
FROM user_tables
WHERE table_name LIKE 'OAM_DEMO%'
ORDER BY table_name;

Then look at the columns and data types:

SELECT table_name, column_name, data_type
FROM user_tab_columns
WHERE table_name LIKE 'OAM_DEMO%'
ORDER BY table_name, column_id;

That is where the storage story becomes concrete. You should see the normal relational columns, the JSON columns in the app-owned tables, and, after running a memory-populating command, the vector-related storage in the managed record chunk table.

The graph edge indexes are also useful to inspect:

SELECT index_name, table_name, column_name
FROM user_ind_columns
WHERE index_name LIKE 'OAM_DEMO_APP_GRAPH_EDGE%'
ORDER BY index_name, column_position;

To see the JSON case state, run:

SELECT case_id,
       title,
       json_value(state_json, '$.status') AS status,
       json_value(state_json, '$.next_action') AS next_action,
       json_value(state_json, '$.escalation_ready') AS escalation_ready
FROM OAM_DEMO_APP_CASE
ORDER BY case_id;

This is a good example of why JSON is useful here. The case state is still queryable from SQL, but the state document can evolve as the workflow evolves.

Now look at the graph tables:

SELECT vertex_id, vertex_type, label
FROM OAM_DEMO_APP_GRAPH_VERTEX
ORDER BY vertex_type, vertex_id;

SELECT source_vertex_id, relationship_type, target_vertex_id
FROM OAM_DEMO_APP_GRAPH_EDGE
ORDER BY edge_id;

And confirm that the property graph exists:

SELECT object_name, object_type
FROM user_objects
WHERE object_name = 'OAM_DEMO_APP_PROPERTY_GRAPH';

Finally, after running a memory-populating command, look at the managed Agent Memory tables:

SELECT *
FROM OAM_DEMO_MEMORY
FETCH FIRST 5 ROWS ONLY;

SELECT *
FROM OAM_DEMO_RECORD_CHUNKS
FETCH FIRST 5 ROWS ONLY;

This query shows the vector columns:

SELECT column_name, data_type
FROM user_tab_columns
WHERE table_name = 'OAM_DEMO_RECORD_CHUNKS'
AND data_type = 'VECTOR';

And this one shows vector indexes:

SELECT index_name, index_type
FROM user_indexes
WHERE table_name = 'OAM_DEMO_RECORD_CHUNKS'
ORDER BY index_name;

That is the part I like most in this demo. We are not just saying that memory is persistent. We can actually look at the tables and see how different kinds of memory are represented.

Verify graph and vector behavior

The verify-memory command is a nice acceptance check:

uv run agent-memory-demo verify-memory

It seeds explicit Agent Memory records, runs graph traversal, runs vector-backed similarity search, and prints metadata evidence for the managed OAM_DEMO_RECORD_CHUNKS table.

The expected similar prior router case is case_wifi_dropout_001.

That matters because the follow-up issue does not have to use the exact same words as the earlier issue. Vector search can connect “video calls freeze” with a prior router dropout case because the meaning is similar.

This is the right place to use vectors. You are not asking for the one row with a known primary key. You are asking, “Have we seen something like this before?”

When to use each Oracle data type

Here is the practical version.

Use relational tables for the things you must identify and constrain: users, accounts, devices, cases, policies, threads, messages, and profile rows. Relational data gives you keys, constraints, indexes, joins, and ownership boundaries. That is still the backbone of most useful applications.

Use JSON for flexible state. In this sample, support case state lives in JSON because the state can change as the workflow changes. The update_case_state tool uses JSON_MERGEPATCH to update the state document without replacing the whole application model.
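
A minimal sketch of that kind of update, with an illustrative patch document:

-- Illustrative: merge a partial state change into the existing JSON document
-- without rewriting the rest of the case state.
UPDATE OAM_DEMO_APP_CASE
SET state_json = JSON_MERGEPATCH(
        state_json,
        '{"status":"in_progress","next_action":"schedule_replacement"}')
WHERE case_id = 'case_wifi_dropout_001';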

Use graph for connected context. If the agent needs to understand that Alex owns an account, the account has a router, the router has a case, and the case uses a policy, graph traversal makes that relationship path explicit. In this sample, explain_relationships uses a SQL Property Graph query to return user-account-device-case-policy paths.
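
A hedged sketch of that traversal, using the OAM_DEMO_APP_PROPERTY_GRAPH object we confirmed above; the anonymous edge patterns and the vertex_id property are assumptions about how the graph is defined:

SELECT *
FROM GRAPH_TABLE ( OAM_DEMO_APP_PROPERTY_GRAPH
  MATCH (u) -[]-> (a) -[]-> (d) -[]-> (c) -[]-> (p)
  WHERE u.vertex_id = 'user_alex'
  COLUMNS ( u.vertex_id AS user_id,
            a.vertex_id AS account_id,
            d.vertex_id AS device_id,
            c.vertex_id AS case_id,
            p.vertex_id AS policy_id )
);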

Use vectors for similarity. If the user describes the same issue with different words, exact search is not enough. Vector search lets the agent find semantically similar memories and prior cases. In this sample, case summary memories are embedded into record chunks, and the find_related_case tool searches those chunks through Oracle Agent Memory.
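
Under the covers, that kind of search reduces to a vector distance query. A minimal sketch against the managed chunk table, assuming an EMBEDDING vector column and a bound query vector; the real column names are visible in the inspection queries above:

-- Illustrative: the three chunks closest to the query vector by cosine distance.
SELECT *
FROM OAM_DEMO_RECORD_CHUNKS
ORDER BY VECTOR_DISTANCE(embedding, :query_vector, COSINE)
FETCH FIRST 3 ROWS ONLY;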

The real value is not that any one of these exists. The value is that the sample can use all of them together.

Reset the demo

If you want to exercise the destructive reset path, the repository includes this command:

uv run agent-memory-demo reset-db

That resets the app-owned companion schema and recreates the managed Agent Memory schema. It is a demo command, not something to point at a production schema.

What this sample teaches

There are a few patterns here that are worth carrying into real applications.

First, keep the memory backend separate from the agent framework. This demo uses the OpenAI SDK tool loop, but the memory concepts are not OpenAI-specific. The agent needs tools. The memory system needs scoped APIs. Those two things meet at a clean boundary.

Second, scope everything. The demo uses user, agent, and thread boundaries. It also shows that another user should not see Alex’s memories. That is not just a demo flourish. It is table stakes for real multi-user agents.

Third, use the right data shape for the job. Chat messages, durable memories, JSON state, graph relationships, and vector chunks are not the same thing. Treating them differently makes the system easier to reason about.

Fourth, inspect the database. Agent demos can feel magical if all you see is a final answer. This demo is better because it shows the rows, JSON state, graph paths, and vector storage evidence. That makes the behavior testable and explainable.

Wrap up

We built and inspected a memory-enabled support assistant using oracleagentmemory, the OpenAI SDK, and a local Oracle Database container.

The sample shows working memory through threads and messages, durable memory through explicit memories and extracted facts, profile memory for users and agents, JSON state for support cases, relationship memory through SQL Property Graph, and similarity memory through vector-backed record chunks.

The important idea is simple: agents need more than chat history. They need memory that is durable, scoped, queryable, and connected to the data the application already trusts.

This demo gives you a compact way to see that pattern end to end.

Building an Authorization Server with Spring Boot 4 and Oracle Database

Hi again everyone!

In this post I want to show you how to build a small authorization server using Spring Boot, Spring Security, Spring Authorization Server, and Oracle Database. The idea is simple: we want an application that can expose OAuth2/OIDC authorization-server endpoints, authenticate users whose details are stored in Oracle Database, and provide a small REST API for managing those users.

The complete code for this example is in the azn-server repository. In this article we will build it from scratch and look at the important pieces along the way.

One important note before we start: this version of the example is on the Spring Boot 4.x code line. The repository currently uses Spring Boot 4.0.6, Java 21, Spring Framework 7.0.7, Spring Security 7.0.5, Spring Cloud 2025.1.1, Liquibase 5.0.2 from the Spring Boot BOM, and the Oracle Spring Boot starters.

If you have seen the Spring Boot 3.x version of this sample, the application shape is intentionally the same. The Boot 4 version updates the dependency line and uses the new modular starter names and package names introduced across Spring Boot 4, Spring Framework 7, and Spring Security 7.

What we are building

The application has three main responsibilities:

  • Expose OAuth2 and OpenID Connect endpoints using Spring Authorization Server.
  • Store application users in Oracle Database.
  • Provide a secured user-management API backed by Spring Security method security.

Spring Security gives us a lot here. It gives us the authentication framework, password encoding, UserDetailsService integration, filter chains, method-level authorization, role hierarchy support, and the authorization-server protocol endpoints. Oracle Database gives us a durable user repository, schema ownership, constraints, identity columns, auditing triggers, and a real database engine for integration tests.

This is the application shape:

  • Spring Boot starts the service.
  • Liquibase creates or updates the Oracle schema user.
  • Liquibase creates the USERS table and audit trigger.
  • JPA maps the USERS table into a User entity.
  • Spring Security loads users from that JPA repository.
  • Spring Authorization Server exposes the OAuth2/OIDC endpoints.
  • The REST API lets administrators manage users.

Let’s walk through the build.

Create the Spring Boot project

We start with a normal Spring Boot application. The important thing is to include the dependencies for web endpoints, Spring Authorization Server, actuator, JPA, Liquibase, Oracle UCP, Oracle wallet support, and the test stack.

The parent and version properties select the Spring Boot 4 and Spring Cloud lines:

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>4.0.6</version>
        <relativePath/>
    </parent>

    <properties>
        <java.version>21</java.version>
        <spring-cloud.version>2025.1.1</spring-cloud.version>
        <oracle-spring-boot-starter-version>26.1.1</oracle-spring-boot-starter-version>
    </properties>

Here is the dependency section from pom.xml:

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-webmvc</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-security-oauth2-authorization-server</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-liquibase</artifactId>
        </dependency>
        <dependency>
            <groupId>com.oracle.database.spring</groupId>
            <artifactId>oracle-spring-boot-starter-ucp</artifactId>
            <version>${oracle-spring-boot-starter-version}</version>
        </dependency>
        <dependency>
            <groupId>com.oracle.database.spring</groupId>
            <artifactId>oracle-spring-boot-starter-wallet</artifactId>
            <version>${oracle-spring-boot-starter-version}</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
        </dependency>
        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-registry-prometheus</artifactId>
        </dependency>

        <!-- test dependencies -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-webmvc-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-security-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-security-oauth2-authorization-server-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-testcontainers</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.testcontainers</groupId>
            <artifactId>testcontainers-junit-jupiter</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.testcontainers</groupId>
            <artifactId>testcontainers-oracle-free</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

The Spring Cloud BOM is imported separately:

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>${spring-cloud.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>

There are a couple of things to point out here.

First, spring-boot-starter-security-oauth2-authorization-server brings in the Spring Authorization Server support that provides the OAuth2/OIDC protocol endpoints. That means we do not have to hand-code token endpoints, metadata endpoints, JWK endpoints, or the protocol filter chain.

Spring Boot 4 is more modular than the 3.x line. For this servlet application, the web starter is now spring-boot-starter-webmvc, the test slice starter is spring-boot-starter-webmvc-test, and the authorization-server starter lives under the security naming scheme. The Testcontainers 2 artifacts also use the testcontainers-* artifact names shown above. Letting the Spring Boot parent manage the versions keeps Spring Framework, Spring Security, Liquibase, Jackson, Hibernate, and Testcontainers aligned.

Second, the Oracle UCP starter gives us Oracle Universal Connection Pool integration through Spring Boot configuration. That is useful for real services because the database connection pool is not an afterthought – it is part of the application runtime.

Third, Liquibase owns the schema. Hibernate validates the schema, but Liquibase creates it. That is usually the right split for applications where the database is important enough to be managed deliberately.

Configure Oracle Database and Liquibase

The application uses two database identities:

  • A Liquibase/admin identity that can create and update the application schema.
  • A runtime schema user named USER_REPO that the application uses for normal database access.

Here is the application configuration:

server:
  port: 8080

spring:
  application:
    name: @project.artifactId@
  cloud:
    # Discovery is opt-in so local runs and tests do not attempt to register.
    discovery:
      enabled: ${EUREKA_CLIENT_ENABLED:false}
  threads:
    virtual:
      enabled: true
  datasource:
    # Runtime connections authenticate directly as the application schema user.
    url: ${AZN_DATASOURCE_URL:${SPRING_DATASOURCE_URL:}}
    username: ${AZN_USER_REPO_USERNAME:USER_REPO}
    password: ${AZN_USER_REPO_PASSWORD}
    driver-class-name: oracle.jdbc.OracleDriver
    type: oracle.ucp.jdbc.PoolDataSource
    oracleucp:
      connection-factory-class-name: oracle.jdbc.pool.OracleDataSource
      connection-pool-name: AznServerConnectionPool
      initial-pool-size: 15
      min-pool-size: 10
      max-pool-size: 30
  jpa:
    # Keep database access inside service/controller methods, not view rendering.
    open-in-view: false
    hibernate:
      # Liquibase owns schema changes; Hibernate only validates the result.
      ddl-auto: validate
    properties:
      hibernate:
        format_sql: true
    show-sql: false
  liquibase:
    # Liquibase uses the admin account directly so it can create USER_REPO.
    change-log: classpath:db/changelog/controller.yaml
    url: ${AZN_DATASOURCE_URL:${SPRING_DATASOURCE_URL:}}
    user: ${AZN_LIQUIBASE_USERNAME:${AZN_DATASOURCE_USERNAME:${SPRING_LIQUIBASE_USER:${SPRING_DATASOURCE_USERNAME:}}}}
    password: ${AZN_LIQUIBASE_PASSWORD:${AZN_DATASOURCE_PASSWORD:${SPRING_LIQUIBASE_PASSWORD:${SPRING_DATASOURCE_PASSWORD:}}}}
    parameters:
      userRepoPassword: ${AZN_USER_REPO_PASSWORD}
    enabled: ${RUN_LIQUIBASE:true}

azn:
  bootstrap-users:
    admin-password: ${ORACTL_ADMIN_PASSWORD:}
    user-password: ${ORACTL_USER_PASSWORD:}

management:
  endpoint:
    health:
      show-details: when_authorized
      roles: ACTUATOR
  endpoints:
    web:
      exposure:
        # Keep actuator surface small; SecurityConfig protects non-health/info endpoints.
        include: health,info,prometheus

eureka:
  instance:
    hostname: ${spring.application.name}
    preferIpAddress: true
  client:
    # Supported for deployments, disabled by default for local/test startup.
    service-url:
      defaultZone: ${EUREKA_SERVER_ADDRESS:http://localhost:8761/eureka/}
    fetch-registry: true
    register-with-eureka: true
    enabled: ${EUREKA_CLIENT_ENABLED:false}

# Logging
logging:
  level:
    org.springframework.web: INFO
    org.springframework.security: INFO
    oracle.obaas.aznserver: INFO

I like this arrangement because the runtime user is not the same as the schema-management user. Liquibase gets the elevated account it needs to create and manage USER_REPO, and the running application connects as USER_REPO. That is a clean security boundary.

Oracle UCP is configured as the datasource type:

    type: oracle.ucp.jdbc.PoolDataSource
    oracleucp:
      connection-factory-class-name: oracle.jdbc.pool.OracleDataSource
      connection-pool-name: AznServerConnectionPool
      initial-pool-size: 15
      min-pool-size: 10
      max-pool-size: 30

That gives us Oracle-aware connection pooling with very little Spring code. We get the operational benefit of a pool that is meant for Oracle Database, while still configuring it in the usual Spring Boot way.

Create the schema with Liquibase

The changelog controller is small:

---
databaseChangeLog:
  - include:
      file: classpath:db/changelog/dbuser.sql
  - include:
      file: classpath:db/changelog/table.sql
  - include:
      file: classpath:db/changelog/trigger.sql

The first changelog creates and maintains the USER_REPO database user:

-- liquibase formatted sql
-- changeset az_admin:initial_user endDelimiter:/ runAlways:true runOnChange:true
DECLARE
    l_user     VARCHAR2(255);
    l_tblspace VARCHAR2(255);
BEGIN
    BEGIN
        SELECT username INTO l_user FROM DBA_USERS WHERE USERNAME = 'USER_REPO';
    EXCEPTION WHEN no_data_found THEN
        EXECUTE IMMEDIATE 'CREATE USER "USER_REPO" IDENTIFIED BY "${userRepoPassword}"';
    END;
    EXECUTE IMMEDIATE 'ALTER USER "USER_REPO" IDENTIFIED BY "${userRepoPassword}" ACCOUNT UNLOCK';
    SELECT default_tablespace INTO l_tblspace FROM dba_users WHERE username = 'USER_REPO';
    EXECUTE IMMEDIATE 'ALTER USER "USER_REPO" QUOTA UNLIMITED ON ' || l_tblspace;
    EXECUTE IMMEDIATE 'GRANT CONNECT TO "USER_REPO"';
    EXECUTE IMMEDIATE 'GRANT RESOURCE TO "USER_REPO"';
    EXECUTE IMMEDIATE 'ALTER USER "USER_REPO" DEFAULT ROLE CONNECT,RESOURCE';
END;
/
--rollback drop user "USER_REPO" cascade;

The next changelog creates the user table:

-- liquibase formatted sql
-- changeset az_admin:initial_table
CREATE TABLE USER_REPO.USERS
(
    USER_ID    NUMBER GENERATED ALWAYS AS IDENTITY (START WITH 1 CACHE 20),
    PASSWORD   VARCHAR2(255 CHAR) NOT NULL,
    ROLES      VARCHAR2(255 CHAR) NOT NULL,
    USERNAME   VARCHAR2(255 CHAR) NOT NULL,
    CREATED_ON TIMESTAMP DEFAULT SYSDATE,
    CREATED_BY VARCHAR2(100) DEFAULT COALESCE(
        REGEXP_SUBSTR(SYS_CONTEXT('USERENV','CLIENT_IDENTIFIER'),'^[^:]*'),
        SYS_CONTEXT('USERENV','SESSION_USER')),
    UPDATED_ON TIMESTAMP,
    UPDATED_BY VARCHAR2(255),
    PRIMARY KEY (USER_ID),
    CONSTRAINT USERNAME_UQ UNIQUE (USERNAME)
) LOGGING;

COMMENT ON TABLE USER_REPO.USERS is 'Application user repository for OAuth2/OIDC user management';
COMMENT ON COLUMN USER_REPO.USERS.PASSWORD is 'BCrypt hash of the application user password; never store cleartext';

ALTER TABLE USER_REPO.USERS ADD EMAIL VARCHAR2(255 CHAR) NULL;
ALTER TABLE USER_REPO.USERS ADD OTP VARCHAR2(255 CHAR) NULL;
COMMENT ON COLUMN USER_REPO.USERS.OTP is 'BCrypt hash of the one-time password; never store cleartext';

--rollback DROP TABLE USER_REPO.USERS;

There are some good Oracle Database features doing useful work here:

  • GENERATED ALWAYS AS IDENTITY gives us database-managed user ids.
  • The unique constraint protects usernames at the database level.
  • Column comments document sensitive columns right where they live.
  • The table belongs to the USER_REPO schema, not to the application admin user.

Finally, we add a small audit trigger:

-- liquibase formatted sql
-- changeset az_admin:initial_trigger endDelimiter:/
CREATE OR REPLACE EDITIONABLE TRIGGER "USER_REPO"."AUDIT_TRG" BEFORE
UPDATE ON USER_REPO.USERS FOR EACH ROW
BEGIN
    :NEW.UPDATED_ON := SYSDATE;
    :NEW.UPDATED_BY := COALESCE(REGEXP_SUBSTR(SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER'), '^[^:]*'), SYS_CONTEXT('USERENV', 'SESSION_USER'));
END;
/
--rollback DROP TRIGGER "USER_REPO"."AUDIT_TRG";

This is a nice example of letting the database enforce something that belongs in the database. Every update gets audit fields set consistently, whether the update came from this Spring application or from another controlled path later.

Map the Oracle table to a JPA entity

Now we need a JPA entity for the USER_REPO.USERS table.

// Copyright (c) 2023, 2026, Oracle and/or its affiliates.
package oracle.obaas.aznserver.model;

import com.fasterxml.jackson.annotation.JsonProperty;
import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import jakarta.persistence.Table;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.ToString;

@Entity
@Table(name = "users", schema = "user_repo")
@Data
@AllArgsConstructor
@NoArgsConstructor
@ToString(exclude = {"password", "otp"})
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "USER_ID")
    private Long userId;

    @Column(name = "USERNAME", nullable = false)
    private String username;

    /**
     * Stores the BCrypt hash that is persisted in USER_REPO.USERS.PASSWORD.
     * Cleartext passwords may be accepted at API boundaries, but they must be
     * encoded before this entity is saved.
     */
    @JsonProperty(access = JsonProperty.Access.WRITE_ONLY)
    @Column(name = "PASSWORD", nullable = false, length = 255)
    private String password;

    @Column(name = "ROLES", nullable = false)
    private String roles;

    @Column(name = "EMAIL")
    private String email;

    @JsonProperty(access = JsonProperty.Access.WRITE_ONLY)
    @Column(name = "OTP")
    private String otp;

    /**
     * Create a user object.
     *
     * @param username The username.
     * @param password The encoded password hash for persistence.
     * @param roles    The roles assigned the user, as a comma separated list,
     *                 e.g. "ROLE_USER,ROLE_ADMIN".
     */
    public User(String username, String password, String roles) {
        this.username = username;
        this.password = password;
        this.roles = roles;
    }

    // This constructor should only be used during testing with a mock repository,
    // when we need to set the id manually.
    public User(long userId, String username, String password, String roles) {
        this(username, password, roles);
        this.userId = userId;
    }

    /**
     * Create a user object.
     *
     * @param username The username.
     * @param password The encoded password hash for persistence.
     * @param roles    The roles assigned the user, as a comma separated list,
     *                 e.g. "ROLE_USER,ROLE_ADMIN".
     * @param email    The email associated with the user account.
     */
    public User(String username, String password, String roles, String email) {
        this(username, password, roles);
        this.email = email;
    }
}

There are two small but important security choices in this class.

First, password and otp are write-only for JSON serialization. That means the API can accept these values in request bodies, but it will not serialize them back into responses.

Second, Lombok’s @ToString excludes password and otp. That helps prevent secrets from being accidentally written into logs.

The repository is exactly what we want from Spring Data JPA: small, declarative, and focused on the queries the service needs.

// Copyright (c) 2022, 2023, Oracle and/or its affiliates.
package oracle.obaas.aznserver.repository;

import java.util.List;
import java.util.Optional;

import oracle.obaas.aznserver.model.User;
import org.springframework.data.jpa.repository.JpaRepository;

public interface UserRepository extends JpaRepository<User, Long> {
    Optional<User> findByUsername(String username);
    Optional<User> findByUsernameIgnoreCase(String username);
    Optional<User> findByUserId(Long userId);
    List<User> findUsersByUsernameStartsWithIgnoreCase(String username);
    Optional<User> findByEmailIgnoreCase(String email);
}

This is one of the places where Spring Data JPA shines. The method names communicate intent, Spring implements the queries, and the application code stays readable.
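
A quick usage sketch (mine, not from the repository) shows how readable the call sites become:

    // Spring Data derives the SQL for these from the method names at startup.
    Optional<User> exact = userRepository.findByUsername("obaas-admin");
    Optional<User> relaxed = userRepository.findByUsernameIgnoreCase("OBAAS-ADMIN");
    List<User> bootstrap = userRepository.findUsersByUsernameStartsWithIgnoreCase("obaas-");

    exact.ifPresent(u -> System.out.println(u.getUsername()));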

Adapt the database user to Spring Security

Spring Security authenticates with UserDetails. Our database user is a domain object, so we wrap it in a SecurityUser.

// Copyright (c) 2022, 2026, Oracle and/or its affiliates.
package oracle.obaas.aznserver.model;

import java.util.Arrays;
import java.util.Collection;
import java.util.List;

import org.apache.commons.lang3.StringUtils;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.authority.SimpleGrantedAuthority;
import org.springframework.security.core.userdetails.UserDetails;

public class SecurityUser implements UserDetails {

    private final User user;

    public SecurityUser(User user) {
        this.user = user;
    }

    @Override
    public String getUsername() {
        return user.getUsername();
    }

    @Override
    public String getPassword() {
        return user.getPassword();
    }

    @Override
    public Collection<? extends GrantedAuthority> getAuthorities() {
        if (StringUtils.isBlank(user.getRoles())) {
            return List.of();
        }
        return Arrays.stream(user
                .getRoles()
                .split(","))
                .map(SimpleGrantedAuthority::new)
                .toList();
    }

    @Override
    public boolean isAccountNonExpired() {
        return true;
    }

    @Override
    public boolean isAccountNonLocked() {
        return true;
    }

    @Override
    public boolean isCredentialsNonExpired() {
        return true;
    }

    @Override
    public boolean isEnabled() {
        return true;
    }
}

Then we create a UserDetailsService backed by the JPA repository:

// Copyright (c) 2022, 2026, Oracle and/or its affiliates.
package oracle.obaas.aznserver.service;

import oracle.obaas.aznserver.model.SecurityUser;
import oracle.obaas.aznserver.repository.UserRepository;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.core.userdetails.UsernameNotFoundException;
import org.springframework.stereotype.Service;

@Service
public class JpaUserDetailsService implements UserDetailsService {

    private final UserRepository userRepository;

    public JpaUserDetailsService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    @Override
    public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
        SecurityUser user = userRepository
                .findByUsername(username)
                .map(SecurityUser::new)
                .orElseThrow(() -> new UsernameNotFoundException("Authentication failed"));
        return user;
    }
}

This is the bridge between Oracle Database and Spring Security. Once this service exists, Spring Security can authenticate users stored in USER_REPO.USERS.

Configure Spring Security and Spring Authorization Server

The security configuration is the heart of the application. It does several things:

  • Creates a role hierarchy.
  • Enables method security.
  • Creates a dedicated authorization-server filter chain.
  • Creates a separate actuator filter chain.
  • Creates a stateless API filter chain.
  • Provides password encoding.
  • Provides development/test signing keys.
  • Optionally creates a local OAuth client.

Here is the role hierarchy:

    public static final String ROLE_HIERARCHY = "ROLE_ADMIN > ROLE_USER\n"
            + "ROLE_ADMIN > ROLE_CONFIG_EDITOR\n"
            + "ROLE_CONFIG_EDITOR > ROLE_USER";

    /**
     * Configure a role hierarchy such that ADMIN "includes"/implies USER.
     * 
     * @return the hierarchy.
     */

    @Bean
    public RoleHierarchy roleHierarchy() {
        return RoleHierarchyImpl.fromHierarchy(ROLE_HIERARCHY);
    }

    /**
     * Configure method security to use the role hierarchy.
     * 
     * @param roleHierarchy injected by Spring.
     * @return The MethodSecurityExpressionHandler.
     */
    @Bean
    public MethodSecurityExpressionHandler methodSecurityExpressionHandler(RoleHierarchy roleHierarchy) {
        DefaultMethodSecurityExpressionHandler expressionHandler = new DefaultMethodSecurityExpressionHandler();
        expressionHandler.setRoleHierarchy(roleHierarchy);
        return expressionHandler;
    }

Role hierarchy is one of those Spring Security features that is easy to miss but very useful. If an administrator should also be treated as a user, we do not have to duplicate every role check everywhere. We can teach Spring Security that ROLE_ADMIN includes ROLE_USER.
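
To illustrate the effect, consider an endpoint guarded only for users. The handler body below is my sketch, not the repository's code, but the authorization behavior is the point: with the hierarchy above, a caller granted only ROLE_ADMIN passes the check without ROLE_USER appearing in the database row.

    @PreAuthorize("hasRole('USER')")
    @GetMapping("/pinguser")
    public ResponseEntity<String> pingUser() {
        // Reached by ROLE_USER holders and, through the hierarchy, by
        // ROLE_ADMIN and ROLE_CONFIG_EDITOR holders as well.
        return ResponseEntity.ok("pong");
    }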

The authorization server gets its own filter chain:

    /**
     * Authorization Server endpoints use their own filter chain so OAuth protocol
     * handling does not inherit API-specific stateless settings.
     *
     * @param http HttpSecurity injected by Spring.
     * @return the SecurityFilterChain.
     * @throws Exception if unable to create the chain.
     */
    @Bean
    @Order(1)
    public SecurityFilterChain authorizationServerSecurityFilterChain(HttpSecurity http)
            throws Exception {
        log.debug("In authorizationServerSecurityFilterChain");
        OAuth2AuthorizationServerConfigurer authorizationServerConfigurer =
                new OAuth2AuthorizationServerConfigurer();

        http
            .securityMatcher(authorizationServerConfigurer.getEndpointsMatcher())
            .with(authorizationServerConfigurer, authorizationServer ->
                authorizationServer.oidc(Customizer.withDefaults()))
            .authorizeHttpRequests((authorize) -> authorize
                .requestMatchers("/.well-known/**", "/oauth2/jwks").permitAll()
                .anyRequest().authenticated())
            .csrf((csrf) -> csrf.ignoringRequestMatchers(authorizationServerConfigurer.getEndpointsMatcher()))
            .exceptionHandling((exceptions) -> exceptions.defaultAuthenticationEntryPointFor(
                new LoginUrlAuthenticationEntryPoint("/login"),
                new MediaTypeRequestMatcher(MediaType.TEXT_HTML)));
        return http.build();
    }

This is where Spring Authorization Server does a lot of heavy lifting. The endpoints matcher identifies the protocol endpoints, OIDC support is enabled, and the well-known metadata and JWK endpoints are allowed anonymously.

In the Spring Boot 4 version, the authorization-server configurer comes from the Spring Security 7 package org.springframework.security.config.annotation.web.configurers.oauth2.server.authorization. The older static factory used by the Boot 3 version is gone, so the sample constructs the configurer directly and then applies it to HttpSecurity.

The user-management API has a different shape. It is stateless and uses HTTP Basic:

    /**
     * Create a SecurityFilterChain for the user-management API.
     * @param http HttpSecurity injected by Spring. 
     * @param userDetailsService the JPA-backed user details service.
     * @return the SecurityFilterChain.
     * @throws Exception if unable to create the chain.
     */
    @Bean
    @Order(3)
    public SecurityFilterChain apiSecurityFilterChain(HttpSecurity http, UserDetailsService userDetailsService)
            throws Exception {
        log.debug("In apiSecurityFilterChain");
        http
            .securityMatcher("/user/api/**", "/error/**")
            .authorizeHttpRequests((authorize) -> authorize
                .requestMatchers("/error/**").permitAll()
                .requestMatchers("/user/api/v1/ping").permitAll()
                .requestMatchers("/user/api/v1/forgot").permitAll()
                .anyRequest().authenticated()
            )
            .sessionManagement(session ->
                session.sessionCreationPolicy(SessionCreationPolicy.STATELESS))
            .httpBasic(Customizer.withDefaults())
            .userDetailsService(userDetailsService);
        // The user-management API is stateless and does not use browser sessions or cookies.
        http.csrf(csrf -> csrf.disable());
        return http.build();
    }

The separation between the authorization-server chain and the API chain matters. The OAuth2/OIDC endpoints are protocol endpoints. The user API is a REST API. They have different security needs, so they get different chains.

The authentication provider uses our JPA-backed user details service and a BCrypt password encoder:

    /**
     * Create an Authentication Provider for our UserDetailsService.
     * @param userDetailsService the JPA-backed user details service.
     * @param passwordEncoder password encoder for stored password hashes.
     * @return the AuthenticationProvider.
     */
    @Bean
    public DaoAuthenticationProvider authenticationProvider(UserDetailsService userDetailsService,
            PasswordEncoder passwordEncoder) {
        DaoAuthenticationProvider auth = new DaoAuthenticationProvider(userDetailsService);
        auth.setPasswordEncoder(passwordEncoder);
        return auth;
    }

And the password encoder is:

    @Bean
    public PasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }

Passwords in the API enter as clear text at the boundary, but they are stored as BCrypt hashes in Oracle Database. That is exactly the line we want: clear text only at the edge, hashes at rest.
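
A minimal sketch of that round trip, using Spring Security's PasswordEncoder API directly:

    PasswordEncoder encoder = new BCryptPasswordEncoder();

    // Encode once at the boundary. The hash embeds a random salt, so encoding
    // the same input twice produces two different strings.
    String stored = encoder.encode("Correct-Horse9Battery");  // e.g. "$2a$10$..."

    // Verify later with matches(); never compare hashes with equals().
    boolean ok = encoder.matches("Correct-Horse9Battery", stored);  // true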

For local development and tests, the application can create an opt-in registered client:

    /**
     * Create an opt-in local client for test and developer-only contexts.
     *
     * Production deployments should configure registered clients explicitly using
     * Spring Boot's authorization-server client properties.
     *
     * @param passwordEncoder password encoder for the client secret.
     * @param clientSecret configured client secret.
     * @return a local RegisteredClientRepository.
     */
    @Bean
    @ConditionalOnMissingBean
    @ConditionalOnProperty(prefix = "azn.authorization-server.default-client", name = "enabled",
            havingValue = "true")
    public RegisteredClientRepository localRegisteredClientRepository(PasswordEncoder passwordEncoder,
            @Value("${azn.authorization-server.default-client.secret:}") String clientSecret) {
        if (!StringUtils.hasText(clientSecret)) {
            throw new IllegalStateException("azn.authorization-server.default-client.secret must be set when "
                    + "azn.authorization-server.default-client.enabled=true");
        }
        RegisteredClient registeredClient = RegisteredClient.withId(UUID.randomUUID().toString())
                .clientId("azn-local-client")
                .clientSecret(passwordEncoder.encode(clientSecret))
                .clientAuthenticationMethod(ClientAuthenticationMethod.CLIENT_SECRET_BASIC)
                .authorizationGrantType(AuthorizationGrantType.CLIENT_CREDENTIALS)
                .authorizationGrantType(AuthorizationGrantType.AUTHORIZATION_CODE)
                .authorizationGrantType(AuthorizationGrantType.REFRESH_TOKEN)
                .redirectUri("http://127.0.0.1:8080/login/oauth2/code/azn-local-client")
                .scope(OidcScopes.OPENID)
                .scope("user.read")
                .clientSettings(ClientSettings.builder().requireProofKey(true).build())
                .build();
        return new InMemoryRegisteredClientRepository(registeredClient);
    }

Notice that this is opt-in. That is intentional. Local convenience is useful, but production registered clients should be configured deliberately.
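
For production, registered clients would instead come from configuration. Here is a hedged sketch of what that could look like, assuming the Boot starter's spring.security.oauth2.authorizationserver property namespace; the client id, secret, and URIs are placeholders:

spring:
  security:
    oauth2:
      authorizationserver:
        client:
          prod-client:
            registration:
              client-id: "prod-client"
              client-secret: "{bcrypt}$2a$10$replace-with-an-encoded-secret"
              client-authentication-methods: "client_secret_basic"
              authorization-grant-types: "authorization_code,refresh_token"
              redirect-uris: "https://app.example.com/login/oauth2/code/prod-client"
              scopes: "openid,user.read"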

The JWK source is also local by default:

    /**
     * Provide process-local signing keys for development and tests.
     *
     * Production deployments should replace this bean with persistent key material
     * so tokens remain verifiable across restarts and rolling deploys.
     *
     * @return the JWK source.
     */
    @Bean
    @ConditionalOnMissingBean
    public JWKSource<SecurityContext> jwkSource() {
        log.warn("Using process-local generated RSA signing keys. Configure a persistent JWKSource bean for "
                + "production so issued tokens remain verifiable across restarts and rolling deploys.");
        RSAKey rsaKey = generateRsa();
        JWKSet jwkSet = new JWKSet(rsaKey);
        return (jwkSelector, securityContext) -> jwkSelector.select(jwkSet);
    }

That is fine for development and tests. For production, you would provide persistent signing key material so tokens remain verifiable across restarts and rolling deployments.
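
One sketch of a persistent replacement, assuming the signing key pair lives in a PKCS#12 keystore mounted into the container; the path, alias, and environment variable are placeholders I made up:

    @Bean
    public JWKSource<SecurityContext> persistentJwkSource() throws Exception {
        // Assumes SIGNING_KEYSTORE_PASSWORD is set in the environment.
        char[] password = System.getenv("SIGNING_KEYSTORE_PASSWORD").toCharArray();

        // Load the keystore that operations mounts at deploy time.
        KeyStore keyStore = KeyStore.getInstance("PKCS12");
        try (InputStream in = Files.newInputStream(Path.of("/secrets/signing.p12"))) {
            keyStore.load(in, password);
        }

        // Nimbus builds an RSAKey (public and private parts) from the keystore entry.
        RSAKey rsaKey = RSAKey.load(keyStore, "azn-signing", password);
        JWKSet jwkSet = new JWKSet(rsaKey);
        return (jwkSelector, securityContext) -> jwkSelector.select(jwkSet);
    }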

Bootstrap the first users

An authorization server needs at least one user to get started. This application creates three bootstrap users on startup by default:

  • obaas-admin
  • obaas-user
  • obaas-config

The initializer is in the main application class:

    @Bean
    @ConditionalOnProperty(prefix = "azn.bootstrap-users", name = "enabled", havingValue = "true",
            matchIfMissing = true)
    ApplicationRunner userStoreInitializer(UserRepository users, PasswordEncoder passwordEncoder,
            @Value("${azn.bootstrap-users.admin-password:}") String adminPassword,
            @Value("${azn.bootstrap-users.user-password:}") String userPassword) {
        return args -> initUserStore(users, passwordEncoder, adminPassword, userPassword);
    }

And the implementation creates missing users with BCrypt-encoded passwords:

    public static void initUserStore(UserRepository users, PasswordEncoder encoder,
            String adminPassword, String userPassword) {
        log.debug("ENTER initUserStore");

        String obaasAdminPwd = adminPassword;
        String obaasUserPwd = userPassword;
        String obaasConfigPwd = obaasUserPwd;

        // Check for obaas-user, if not existent create the user
        if (users.findByUsername(OBAAS_USER).isEmpty()) {
            log.debug("Creating user obaas-user");

            obaasUserPwd = bootstrapPassword("azn.bootstrap-users.user-password", obaasUserPwd);

            users.saveAndFlush(new User(OBAAS_USER, encoder.encode(obaasUserPwd),
                    "ROLE_USER"));
        }

        // Check for obaas-admin, if not existent create the user
        Optional<User> adminUser = users.findByUsername(OBAAS_ADMIN);
        if (adminUser.isEmpty()) {
            log.debug("Creating user obaas-admin");

            obaasAdminPwd = bootstrapPassword("azn.bootstrap-users.admin-password", obaasAdminPwd);

            users.saveAndFlush(new User(OBAAS_ADMIN, encoder.encode(obaasAdminPwd),
                    "ROLE_ADMIN,ROLE_CONFIG_EDITOR,ROLE_USER"));
        }

        // Check for obaas-config, if not existent create the user with the same pwd as
        // obaas-user
        if (users.findByUsernameIgnoreCase(OBAAS_CONFIG).isEmpty()) {
            log.debug("Creating user obaas-config");

            obaasConfigPwd = bootstrapPassword("azn.bootstrap-users.user-password", obaasConfigPwd);

            users.saveAndFlush(new User(OBAAS_CONFIG, encoder.encode(obaasConfigPwd),
                    "ROLE_CONFIG_EDITOR,ROLE_USER"));
        }
    }

The bootstrap passwords come from external configuration. If bootstrap users are enabled and the password properties are missing, startup fails:

    private static String bootstrapPassword(String propertyName, String configuredPassword) {
        if (StringUtils.isNotBlank(configuredPassword)) {
            return configuredPassword;
        }
        throw new IllegalStateException(propertyName + " must be set when azn.bootstrap-users.enabled=true");
    }

That is much better than quietly creating default passwords.

Build the user-management API

The API is a normal Spring REST controller:

@RestController
@RequestMapping("/user/api/v1")
@Slf4j
public class DbUserRepoController {

    public static final String ROLE_ADMIN = "ADMIN";

    private static final String isAdminUser = "hasRole('ADMIN')";
    private static final String isUser = "hasRole('USER')";

    private static final Pattern PASSWORD_PATTERN =
            Pattern.compile("^(?=.*[?!$%^*\\-_])(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z]).{12,}$");

    final UserRepository userRepository;
    final PasswordEncoder passwordEncoder;

    public DbUserRepoController(UserRepository userRepository, PasswordEncoder passwordEncoder) {
        this.userRepository = userRepository;
        this.passwordEncoder = passwordEncoder;
    }

The connect endpoint is a simple authenticated check:

    @PreAuthorize("hasAnyRole('ADMIN','USER','CONFIG_EDITOR')")
    @GetMapping("/connect")
    public ResponseEntity<String> connect() {
        Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
        UserDetails userDetails = (UserDetails) authentication.getPrincipal();
        String authorities = userDetails.getAuthorities().toString();

        log.debug("/connect Username: {}", authentication.getName());
        log.debug("/connect Authorities: {}", userDetails.getAuthorities());
        log.debug("/connect Details: {}", authentication.getDetails());

        return new ResponseEntity<>(authorities, HttpStatus.OK);
    }

Creating a user is restricted to admins:

    @PreAuthorize(isAdminUser)
    @PostMapping("/createUser")
    public ResponseEntity<?> createUser(@RequestBody User user) {

        // If user exists return HTTP Status 409.
        Optional<User> checkUser = userRepository.findByUsernameIgnoreCase(user.getUsername());
        if (checkUser.isPresent()) {
            log.debug("User exists");
            return new ResponseEntity<>("User already exists", HttpStatus.CONFLICT);
        }

        if (!isValidPassword(user.getPassword())) {
            return new ResponseEntity<>("Password does not meet complexity requirements",
                    HttpStatus.UNPROCESSABLE_ENTITY);
        }

        if (StringUtils.isNotEmpty(user.getEmail())) {
            Optional<User> userAlreadyAssociatedWithEMail = userRepository.findByEmailIgnoreCase(user.getEmail());
            if (userAlreadyAssociatedWithEMail.isPresent()) {
                log.debug("User exists");
                return new ResponseEntity<>("Another user exists with same email", HttpStatus.CONFLICT);
            }
        }

        // Validate roles in RequestBody
        boolean hasValidRole = validateRole(user);
        log.debug("Valid role: {}", hasValidRole);

        // If Valid role create the user else send HTTP 422
        if (hasValidRole) {
            try {
                User users = userRepository.save(new User(
                        user.getUsername(),
                        passwordEncoder.encode(user.getPassword()),
                        user.getRoles(), user.getEmail()));
                return new ResponseEntity<>(users, HttpStatus.CREATED);
            } catch (Exception e) {
                return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).build();
            }
        } else {
            return ResponseEntity.status(HttpStatus.UNPROCESSABLE_ENTITY).build();
        }
    }

There are a few important details in here:

  • Usernames are checked case-insensitively.
  • Duplicate emails are rejected.
  • Password complexity is enforced before saving.
  • Roles are validated against the enum.
  • Passwords are encoded before the entity is persisted.

The use of ResponseEntity.status(...).build() is intentional in the Boot 4 version. Spring Framework 7 adds ResponseEntity constructors that make new ResponseEntity<>(null, status) ambiguous, so empty responses should use the builder API.
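
Concretely (my illustration of the same point):

    // Boot 3 style; under Spring Framework 7 this constructor call is ambiguous:
    //     return new ResponseEntity<>(null, HttpStatus.UNPROCESSABLE_ENTITY);

    // Boot 4 style used throughout this controller:
    return ResponseEntity.status(HttpStatus.UNPROCESSABLE_ENTITY).build();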

The password validation is intentionally small and explicit:

    private boolean isValidPassword(String password) {
        return StringUtils.isNotBlank(password) && PASSWORD_PATTERN.matcher(password).matches();
    }
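
A few illustrative inputs (my examples, written as if asserted from a test inside the class, since the helper is private):

    assert  isValidPassword("Str0ngPass-word");  // 12+ chars, upper, lower, digit, '-'
    assert !isValidPassword("weakpassword1!");   // no uppercase letter
    assert !isValidPassword("Short-Pw1");        // fewer than 12 characters
    assert !isValidPassword("LongPassword123");  // no special character from ?!$%^*-_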

The role validation uses the enum:

    private boolean validateRole(User user) {
        try {
            if (StringUtils.isBlank(user.getRoles())) {
                return false;
            }
            Arrays.stream(user.getRoles().toUpperCase()
                    .replace("[", "")
                    .replace("]", "")
                    .replace(" ", "")
                    .split(","))
                    .map(UserRoles::valueOf)
                    .toList();
            return true;
        } catch (IllegalArgumentException illegalArgumentException) {
            return false;
        }
    }
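
The UserRoles enum itself does not appear in this post. A minimal sketch consistent with the role names used elsewhere in the application would be the following; the real repository may define more:

    public enum UserRoles {
        ROLE_USER,
        ROLE_CONFIG_EDITOR,
        ROLE_ADMIN
    }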

Password changes are available to admins or to the user changing their own password:

    @PreAuthorize(isUser)
    @PutMapping("/updatePassword")
    public ResponseEntity<User> changePassword(@RequestBody UserInfoDto userInfo) {

        if (!isValidPassword(userInfo.password())) {
            return ResponseEntity.status(HttpStatus.UNPROCESSABLE_ENTITY).build();
        }

        // Check if the user is a user with ADMIN
        SecurityContext securityContext = SecurityContextHolder.getContext();
        boolean isAdminUser = false;

        for (GrantedAuthority role : securityContext.getAuthentication().getAuthorities()) {
            if (role.getAuthority().contains(ROLE_ADMIN)) {
                isAdminUser = true;
            }
        }

        // TODO: Must update the correspondent secret??

        // If the username of the authenticated user matches the requestbody username,
        // or if it is a user with ROLE_ADMIN
        if ((userInfo.username().compareTo(securityContext.getAuthentication().getName()) == 0) || isAdminUser) {
            try {
                Optional<User> user = userRepository.findByUsername(userInfo.username());
                if (user.isPresent()) {
                    user.get().setPassword(passwordEncoder.encode(userInfo.password()));
                    userRepository.saveAndFlush(user.get());
                    return ResponseEntity.status(HttpStatus.OK).build();
                } else {
                    return ResponseEntity.status(HttpStatus.NO_CONTENT).build();
                }
            } catch (Exception e) {
                return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).build();
            }
        } else {
            return ResponseEntity.status(HttpStatus.FORBIDDEN).build();
        }
    }

And the forgot-password flow keeps OTP values hashed too:

    @PostMapping("/forgot")
    public ResponseEntity<UserInfoDto> createOTP(@RequestBody(required = true) User inUser) {
        if (StringUtils.isNotEmpty(inUser.getUsername()) && StringUtils.isNotEmpty(inUser.getOtp())) {
            try {
                Optional<User> user = userRepository.findByUsernameIgnoreCase(inUser.getUsername());
                if (user.isEmpty()) {
                    return new ResponseEntity<>(HttpStatus.NO_CONTENT);
                }
                user.get().setOtp(passwordEncoder.encode(inUser.getOtp()));
                userRepository.saveAndFlush(user.get());
                return ResponseEntity.status(HttpStatus.OK).build();
            } catch (Exception e) {
                return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).build();
            }

        } else {
            return ResponseEntity.status(HttpStatus.UNPROCESSABLE_ENTITY).build();
        }
    }

The reset endpoint compares the provided OTP against the stored BCrypt hash:

    @PutMapping("/forgot")
    public ResponseEntity<?> reset(@RequestBody(required = true) User inUser) {
        if (StringUtils.isNotEmpty(inUser.getUsername()) && StringUtils.isNotEmpty(inUser.getOtp())
                && StringUtils.isNotEmpty(inUser.getPassword())) {
            if (!isValidPassword(inUser.getPassword())) {
                return new ResponseEntity<>("Password does not meet complexity requirements",
                        HttpStatus.UNPROCESSABLE_ENTITY);
            }
            try {
                Optional<User> user = userRepository.findByUsernameIgnoreCase(inUser.getUsername());
                if (user.isEmpty()) {
                    return new ResponseEntity<>("User does not exist", HttpStatus.NO_CONTENT);
                }

                if (StringUtils.isEmpty(user.get().getOtp())) {
                    return new ResponseEntity<>("OTP not  generated.", HttpStatus.CONFLICT);
                }

                if (StringUtils.isEmpty(user.get().getPassword())) {
                    return new ResponseEntity<>("Password not  provided.", HttpStatus.CONFLICT);
                }

                if (!passwordEncoder.matches(inUser.getOtp(), user.get().getOtp())) {
                    return new ResponseEntity<>("OTP does not match.", HttpStatus.CONFLICT);
                }

                if (passwordEncoder.matches(inUser.getPassword(), user.get().getPassword())) {
                    return new ResponseEntity<>("Password can not be same as previous.", HttpStatus.CONFLICT);
                }

                user.get().setOtp(null);
                user.get().setPassword(passwordEncoder.encode(inUser.getPassword()));
                userRepository.saveAndFlush(user.get());

                return new ResponseEntity<>("Password successfully changed.",
                        HttpStatus.OK);
            } catch (Exception e) {
                return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).build();
            }

        } else {
            return ResponseEntity.status(HttpStatus.UNPROCESSABLE_ENTITY).build();
        }
    }

Again, the pattern is the same: accept secret values at the boundary, compare or encode them through Spring Security’s PasswordEncoder, and do not disclose them in responses.

Run Locally

Now let’s run the finished application. This section follows the Run Locally flow from the repository README, because it is the quickest way to prove that Oracle Database is available, Liquibase can create the schema, and the bootstrap users can be created.

Start with an Oracle database that the Liquibase admin user can connect to. For a disposable local Oracle database, you can use the same image family as the integration tests:

docker run --name azn-oracle --rm -p 1521:1521 \
  -e ORACLE_PASSWORD='LocalSystem123!' \
  gvenzl/oracle-free:23.26.1-slim-faststart

In another terminal, configure the app. The USER_REPO password is the Oracle schema password that Liquibase assigns to the runtime database user. The bootstrap passwords are application user passwords and will be stored as BCrypt hashes in USER_REPO.USERS.

export AZN_DATASOURCE_URL='jdbc:oracle:thin:@//localhost:1521/FREEPDB1'
export AZN_LIQUIBASE_USERNAME='SYSTEM'
export AZN_LIQUIBASE_PASSWORD='LocalSystem123!'
export AZN_USER_REPO_USERNAME='USER_REPO'
export AZN_USER_REPO_PASSWORD='LocalUserRepo123!'
export ORACTL_ADMIN_PASSWORD='LocalAdmin123!'
export ORACTL_USER_PASSWORD='LocalUser123!'
export AZN_AUTHORIZATION_SERVER_DEFAULT_CLIENT_ENABLED=true
export AZN_AUTHORIZATION_SERVER_DEFAULT_CLIENT_SECRET='LocalClient123!'

Run the application:

mvn spring-boot:run

The app listens on http://localhost:8080.

Smoke Test API

Now we can walk through the finished code using the Smoke Test API flow from the README. These calls verify that the app is running, the Authorization Server endpoints are exposed, users can authenticate, and an admin can create and update users.

Before we call the API, set up the shell variables used by the walkthrough:

export BASE_URL='http://localhost:8080'
export ADMIN_USER='obaas-admin'
export ADMIN_PASSWORD='LocalAdmin123!'
export TEST_USER='readme-user'
export TEST_PASSWORD='ReadmeUser123!'
export TEST_PASSWORD_2='ReadmeUser456!'
export TEST_EMAIL='readme-user@example.com'

First, check the anonymous endpoints:

curl -i "$BASE_URL/actuator/health"
curl -i "$BASE_URL/user/api/v1/ping"
curl -i "$BASE_URL/.well-known/oauth-authorization-server"
curl -i "$BASE_URL/oauth2/jwks"

This verifies the basic shape of the running service. Actuator health is available, the unauthenticated ping endpoint works, the authorization server metadata is published, and the JWK endpoint is available for token verification.

Now authenticate with the bootstrap admin user:

curl -i -u "$ADMIN_USER:$ADMIN_PASSWORD" "$BASE_URL/user/api/v1/connect"
curl -i -u "$ADMIN_USER:$ADMIN_PASSWORD" "$BASE_URL/user/api/v1/pingadmin"

The admin user was inserted during startup by the bootstrap initializer. The password came from configuration and was stored in Oracle Database as a BCrypt hash.

If you are rerunning this sequence, remove the sample user first:

curl -i -u "$ADMIN_USER:$ADMIN_PASSWORD"
-X DELETE
"$BASE_URL/user/api/v1/deleteUsername?username=$TEST_USER"

Create a user. Passwords must be at least 12 characters and include uppercase, lowercase, a number, and one of ?!$%^*-_.

curl -i -u "$ADMIN_USER:$ADMIN_PASSWORD"
-H 'Content-Type: application/json'
-d "{"username":"$TEST_USER","password":"$TEST_PASSWORD","roles":"ROLE_USER","email":"$TEST_EMAIL"}"
"$BASE_URL/user/api/v1/createUser"

This call exercises several things at once. Spring Security authorizes the admin request, the controller validates the role and password, the password is encoded with BCrypt, and JPA stores the user in USER_REPO.USERS.

Verify that the new user can authenticate and use a user endpoint:

curl -i -u "$TEST_USER:$TEST_PASSWORD" "$BASE_URL/user/api/v1/connect"
curl -i -u "$TEST_USER:$TEST_PASSWORD" "$BASE_URL/user/api/v1/pinguser"

Now find the user as an admin. Password and OTP fields are write-only and should not appear in the response body.

curl -i -u "$ADMIN_USER:$ADMIN_PASSWORD"
"$BASE_URL/user/api/v1/findUser?username=$TEST_USER"

That response is a useful security check. The API can accept sensitive values, but it should not echo them back.

Next, change the user’s role, then authenticate with the same user against the config-editor endpoint:

curl -i -u "$ADMIN_USER:$ADMIN_PASSWORD"
-X PUT
-H 'Content-Type: application/json'
-d "{"username":"$TEST_USER","roles":"ROLE_CONFIG_EDITOR"}"
"$BASE_URL/user/api/v1/changeRole"
curl -i -u "$TEST_USER:$TEST_PASSWORD" "$BASE_URL/user/api/v1/pingceditor"

That shows the method-security path working. The role stored in Oracle Database changes, Spring Security reads it through the JPA-backed UserDetailsService, and the endpoint authorization follows the updated authorities.

Change the user’s email:

curl -i -u "$ADMIN_USER:$ADMIN_PASSWORD"
-X PUT
-H 'Content-Type: application/json'
-d "{"username":"$TEST_USER","email":"updated-$TEST_EMAIL"}"
"$BASE_URL/user/api/v1/changeEmail"

Change the user’s password as the user, then authenticate with the new password:

curl -i -u "$TEST_USER:$TEST_PASSWORD"
-X PUT
-H 'Content-Type: application/json'
-d "{"username":"$TEST_USER","password":"$TEST_PASSWORD_2"}"
"$BASE_URL/user/api/v1/updatePassword"
curl -i -u "$TEST_USER:$TEST_PASSWORD_2" "$BASE_URL/user/api/v1/connect"

Again, the cleartext password only crosses the API boundary. The value stored in Oracle Database is a BCrypt hash.

Exercise the forgot-password flow. The OTP is accepted in the request but is stored as a BCrypt hash and is not disclosed by the lookup endpoint.

curl -i \
  -H 'Content-Type: application/json' \
  -d "{\"username\":\"$TEST_USER\",\"otp\":\"123456\"}" \
  "$BASE_URL/user/api/v1/forgot"
curl -i "$BASE_URL/user/api/v1/forgot?username=$TEST_USER"
curl -i \
  -X PUT \
  -H 'Content-Type: application/json' \
  -d "{\"username\":\"$TEST_USER\",\"otp\":\"123456\",\"password\":\"ReadmeReset123!\"}" \
  "$BASE_URL/user/api/v1/forgot"

Finally, verify the opt-in local OAuth client with the client credentials flow:

curl -i -u 'azn-local-client:LocalClient123!' \
  -d 'grant_type=client_credentials' \
  -d 'scope=user.read' \
  "$BASE_URL/oauth2/token"

That call hits the Spring Authorization Server token endpoint and should return a bearer access token.
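
The body is a standard OAuth2 token response, shaped roughly like this (the token is truncated and all values are illustrative):

    {
      "access_token": "eyJraWQiOiJh...",
      "scope": "user.read",
      "token_type": "Bearer",
      "expires_in": 299
    }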

When you are finished, you can clean up the demo user:

curl -i -u "$ADMIN_USER:$ADMIN_PASSWORD"
-X DELETE
"$BASE_URL/user/api/v1/deleteUsername?username=$TEST_USER"

The most useful local endpoints are:

  • GET /actuator/health
  • GET /.well-known/oauth-authorization-server
  • GET /oauth2/jwks
  • GET /user/api/v1/ping

At this point we have an Oracle-backed user repository, Spring Security authentication against that repository, a working user-management API, and Spring Authorization Server issuing tokens.

Test against a real Oracle Database

The integration tests use Testcontainers with Oracle Free:

@Testcontainers
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
abstract class OracleIntegrationTestSupport {

    private static final DockerImageName ORACLE_IMAGE =
            DockerImageName.parse("gvenzl/oracle-free:23.26.1-slim-faststart");
    private static final AtomicInteger POOL_SEQUENCE = new AtomicInteger();
    private static final SecureRandom PASSWORD_RANDOM = new SecureRandom();

    static final String BOOTSTRAP_PASSWORD = generatedPassword();
    static final String USER_REPO_PASSWORD = generatedPassword();
    private static final String ORACLE_PASSWORD = generatedPassword();

    @Container
    static final OracleContainer ORACLE = new OracleContainer(ORACLE_IMAGE)
            .withPassword(ORACLE_PASSWORD);

The test support wires Spring Boot to the container:

    static void configureOracleProperties(DynamicPropertyRegistry registry) {
        String poolName = "AznServerOracleIT-" + POOL_SEQUENCE.incrementAndGet();

        registry.add("spring.datasource.url", ORACLE::getJdbcUrl);
        registry.add("spring.datasource.username", () -> "USER_REPO");
        registry.add("spring.datasource.password", () -> USER_REPO_PASSWORD);
        registry.add("spring.datasource.driver-class-name", ORACLE::getDriverClassName);
        registry.add("spring.datasource.type", () -> "oracle.ucp.jdbc.PoolDataSource");
        registry.add("spring.datasource.oracleucp.connection-factory-class-name",
                () -> "oracle.jdbc.pool.OracleDataSource");
        registry.add("spring.datasource.oracleucp.connection-pool-name", () -> poolName);
        registry.add("spring.datasource.oracleucp.initial-pool-size", () -> "1");
        registry.add("spring.datasource.oracleucp.min-pool-size", () -> "1");
        registry.add("spring.datasource.oracleucp.max-pool-size", () -> "4");
        registry.add("spring.liquibase.url", ORACLE::getJdbcUrl);
        registry.add("spring.liquibase.user", () -> "system");
        registry.add("spring.liquibase.password", ORACLE::getPassword);
        registry.add("spring.liquibase.parameters.userRepoPassword", () -> USER_REPO_PASSWORD);
        registry.add("spring.liquibase.enabled", () -> "true");
        registry.add("azn.bootstrap-users.enabled", () -> "true");
        registry.add("azn.bootstrap-users.admin-password", () -> BOOTSTRAP_PASSWORD);
        registry.add("azn.bootstrap-users.user-password", () -> BOOTSTRAP_PASSWORD);
        registry.add("azn.authorization-server.default-client.secret", () -> "TestLocalClientSecret123!");
        registry.add("eureka.client.enabled", () -> "false");
        registry.add("spring.cloud.discovery.enabled", () -> "false");
        registry.add("spring.cloud.service-registry.auto-registration.enabled", () -> "false");
    }

This is a big benefit of the Oracle Testcontainers support. The tests exercise the actual database behavior: Liquibase, schema creation, identity columns, BCrypt hashes stored in the table, and the Spring Boot datasource configuration.

The authorization server integration test verifies metadata, JWKs, and token issuance:

    @Test
    void exposesAuthorizationServerMetadataAndJwks() {
        ResponseEntity<String> metadata = restTemplate.getForEntity(
                url("/.well-known/oauth-authorization-server"), String.class);
        ResponseEntity<String> jwks = restTemplate.getForEntity(url("/oauth2/jwks"), String.class);

        assertThat(metadata.getStatusCode()).isEqualTo(HttpStatus.OK);
        assertThat(metadata.getBody()).contains("authorization_endpoint", "token_endpoint", "jwks_uri");
        assertThat(jwks.getStatusCode()).isEqualTo(HttpStatus.OK);
        assertThat(jwks.getBody()).contains("\"keys\"");
    }

    @Test
    void issuesClientCredentialsAccessToken() {
        HttpHeaders headers = new HttpHeaders();
        headers.setBasicAuth("integration-client", "integration-secret");
        headers.setContentType(MediaType.APPLICATION_FORM_URLENCODED);
        MultiValueMap<String, String> body = new LinkedMultiValueMap<>();
        body.add("grant_type", "client_credentials");
        body.add("scope", "user.read");

        ResponseEntity<String> response = restTemplate.postForEntity(url("/oauth2/token"),
                new HttpEntity<>(body, headers), String.class);

        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.OK);
        assertThat(response.getBody()).contains("access_token", "Bearer");
    }

And the user API integration test verifies that secrets are not leaked:

    @Test
    void adminCanCreateAndFindUserWithoutLeakingSecrets() {
        TestRestTemplate admin = restTemplate.withBasicAuth("obaas-admin", BOOTSTRAP_PASSWORD);
        Map<String, String> request = Map.of(
                "username", "api-user",
                "password", "StrongPass123!",
                "roles", "ROLE_USER",
                "email", "[email protected]");

        ResponseEntity<String> createResponse = admin.postForEntity(url("/user/api/v1/createUser"),
                request, String.class);
        ResponseEntity<String> findResponse = admin.getForEntity(url("/user/api/v1/findUser?username=api-user"),
                String.class);

        assertThat(createResponse.getStatusCode()).isEqualTo(HttpStatus.CREATED);
        assertThat(createResponse.getBody()).contains("api-user").doesNotContain("StrongPass123!");
        assertThat(findResponse.getStatusCode()).isEqualTo(HttpStatus.OK);
        assertThat(findResponse.getBody())
                .contains("api-user")
                .doesNotContain("StrongPass123!")
                .doesNotContain("otp");
        assertThat(userRepository.findByUsername("api-user"))
                .hasValueSatisfying(user -> {
                    assertThat(user.getPassword()).isNotEqualTo("StrongPass123!").startsWith("$2");
                    assertThat(passwordEncoder.matches("StrongPass123!", user.getPassword())).isTrue();
                });
    }

That is exactly the sort of test I like for this kind of application. It verifies behavior from the outside, and then checks the database-backed repository to confirm the security property we care about: the cleartext password was not stored.

Wrap up

We now have a working Spring Boot 4 authorization server backed by Oracle Database.

Spring Security and Spring Authorization Server give us the authentication framework, filter chains, method-level authorization, password encoding, OAuth2/OIDC endpoints, JWK support, and token issuance. Oracle Database gives us a proper persistent user repository with schema ownership, constraints, audit fields, identity columns, and real integration testing through Testcontainers.

There are a few production topics that deserve their own treatment, especially persistent signing keys, production registered-client storage, wallet-based database connectivity, deployment configuration, and observability. But the core pattern is here: let Spring Security handle security, let Oracle Database handle the durable user store, and keep the boundary between them small and explicit.

The important Spring Boot 4 changes in this version are mostly at the edges: updated starter names, Spring Security 7 package/API moves, Spring Framework 7 response builder usage for empty responses, and Testcontainers 2 artifact names. The application design stays pleasantly boring, which is exactly what I want from a framework upgrade.

]]>
https://redstack.dev/2026/05/04/building-an-authorization-server-with-spring-boot-4-and-oracle-database/feed/ 0 4317
Building an Authorization Server with Spring Boot 3 and Oracle Database https://redstack.dev/2026/04/30/building-an-authorization-server-with-spring-boot-3-and-oracle-database/ https://redstack.dev/2026/04/30/building-an-authorization-server-with-spring-boot-3-and-oracle-database/#respond <![CDATA[Mark Nelson]]> Fri, 01 May 2026 00:22:09 +0000 <![CDATA[Uncategorized]]> <![CDATA[authentication]]> <![CDATA[authorization]]> <![CDATA[jwt]]> <![CDATA[oidc]]> <![CDATA[oracle]]> <![CDATA[Spring]]> <![CDATA[spring-security]]> https://redstack.dev/?p=4307 <![CDATA[Hi everyone! In this post I want to show you how to build a small authorization server using Spring Boot, Spring Security, Spring Authorization Server, and Oracle Database. The idea is simple: we want an application that can expose OAuth2/OIDC … Continue reading ]]> <![CDATA[

Hi everyone!

In this post I want to show you how to build a small authorization server using Spring Boot, Spring Security, Spring Authorization Server, and Oracle Database. The idea is simple: we want an application that can expose OAuth2/OIDC authorization-server endpoints, authenticate users whose details are stored in Oracle Database, and provide a small REST API for managing those users.

The complete code for this example is in the azn-server repository. In this article we will build it from scratch and look at the important pieces along the way.

One important note before we start: this version of the example is on the Spring Boot 3.x code line. The repository currently uses Spring Boot 3.5.x, Java 21, Spring Authorization Server through the Spring Boot starter, and the Oracle Spring Boot starters. A future post will cover the move to Spring Boot 4.x, including the associated new versions of Spring Framework and Spring Security.

What we are building

The application has three main responsibilities:

  • Expose OAuth2 and OpenID Connect endpoints using Spring Authorization Server.
  • Store application users in Oracle Database.
  • Provide a secured user-management API backed by Spring Security method security.

Spring Security gives us a lot here. It gives us the authentication framework, password encoding, UserDetailsService integration, filter chains, method-level authorization, role hierarchy support, and the authorization-server protocol endpoints. Oracle Database gives us a durable user repository, schema ownership, constraints, identity columns, auditing triggers, and a real database engine for integration tests.

This is the application shape:

  • Spring Boot starts the service.
  • Liquibase creates or updates the Oracle schema user.
  • Liquibase creates the USERS table and audit trigger.
  • JPA maps the USERS table into a User entity.
  • Spring Security loads users from that JPA repository.
  • Spring Authorization Server exposes the OAuth2/OIDC endpoints.
  • The REST API lets administrators manage users.

Let’s walk through the build.

Create the Spring Boot project

We start with a normal Spring Boot application. The important thing is to include the dependencies for web endpoints, Spring Authorization Server, actuator, JPA, Liquibase, Oracle UCP, Oracle wallet support, and the test stack.

Here is the dependency section from pom.xml:

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-oauth2-authorization-server</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.liquibase</groupId>
            <artifactId>liquibase-core</artifactId>
        </dependency>
        <dependency>
            <groupId>com.oracle.database.spring</groupId>
            <artifactId>oracle-spring-boot-starter-ucp</artifactId>
            <version>${oracle-spring-boot-starter-version}</version>
        </dependency>
        <dependency>
            <groupId>com.oracle.database.spring</groupId>
            <artifactId>oracle-spring-boot-starter-wallet</artifactId>
            <version>${oracle-spring-boot-starter-version}</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
        </dependency>
        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-registry-prometheus</artifactId>
        </dependency>

        <!-- test dependencies -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.security</groupId>
            <artifactId>spring-security-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-testcontainers</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.testcontainers</groupId>
            <artifactId>junit-jupiter</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.testcontainers</groupId>
            <artifactId>oracle-free</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

There are a couple of things to point out here.

First, spring-boot-starter-oauth2-authorization-server brings in the Spring Authorization Server support that provides the OAuth2/OIDC protocol endpoints. That means we do not have to hand-code token endpoints, metadata endpoints, JWK endpoints, or the protocol filter chain.

Second, the Oracle UCP starter gives us Oracle Universal Connection Pool integration through Spring Boot configuration. That is useful for real services because the database connection pool is not an afterthought – it is part of the application runtime.

Third, Liquibase owns the schema. Hibernate validates the schema, but Liquibase creates it. That is usually the right split for applications where the database is important enough to be managed deliberately.

Configure Oracle Database and Liquibase

The application uses two database identities:

  • A Liquibase/admin identity that can create and update the application schema.
  • A runtime schema user named USER_REPO that the application uses for normal database access.

Here is the application configuration:

server:
  port: 8080

spring:
  application:
    name: @project.artifactId@
  cloud:
    # Discovery is opt-in so local runs and tests do not attempt to register.
    discovery:
      enabled: ${EUREKA_CLIENT_ENABLED:false}
  threads:
    virtual:
      enabled: true
  datasource:
    # Runtime connections authenticate directly as the application schema user.
    url: ${AZN_DATASOURCE_URL:${SPRING_DATASOURCE_URL:}}
    username: ${AZN_USER_REPO_USERNAME:USER_REPO}
    password: ${AZN_USER_REPO_PASSWORD}
    driver-class-name: oracle.jdbc.OracleDriver
    type: oracle.ucp.jdbc.PoolDataSource
    oracleucp:
      connection-factory-class-name: oracle.jdbc.pool.OracleDataSource
      connection-pool-name: AznServerConnectionPool
      initial-pool-size: 15
      min-pool-size: 10
      max-pool-size: 30
  jpa:
    # Keep database access inside service/controller methods, not view rendering.
    open-in-view: false
    hibernate:
      # Liquibase owns schema changes; Hibernate only validates the result.
      ddl-auto: validate
    properties:
      hibernate:
        format_sql: true
    show-sql: false
  liquibase:
    # Liquibase uses the admin account directly so it can create USER_REPO.
    change-log: classpath:db/changelog/controller.yaml
    url: ${AZN_DATASOURCE_URL:${SPRING_DATASOURCE_URL:}}
    user: ${AZN_LIQUIBASE_USERNAME:${AZN_DATASOURCE_USERNAME:${SPRING_LIQUIBASE_USER:${SPRING_DATASOURCE_USERNAME:}}}}
    password: ${AZN_LIQUIBASE_PASSWORD:${AZN_DATASOURCE_PASSWORD:${SPRING_LIQUIBASE_PASSWORD:${SPRING_DATASOURCE_PASSWORD:}}}}
    parameters:
      userRepoPassword: ${AZN_USER_REPO_PASSWORD}
    enabled: ${RUN_LIQUIBASE:true}

azn:
  bootstrap-users:
    admin-password: ${ORACTL_ADMIN_PASSWORD:}
    user-password: ${ORACTL_USER_PASSWORD:}

management:
  endpoint:
    health:
      show-details: when_authorized
      roles: ACTUATOR
  endpoints:
    web:
      exposure:
        # Keep actuator surface small; SecurityConfig protects non-health/info endpoints.
        include: health,info,prometheus

eureka:
  instance:
    hostname: ${spring.application.name}
    preferIpAddress: true
  client:
    # Supported for deployments, disabled by default for local/test startup.
    service-url:
      defaultZone: ${EUREKA_SERVER_ADDRESS:http://localhost:8761/eureka/}
    fetch-registry: true
    register-with-eureka: true
    enabled: ${EUREKA_CLIENT_ENABLED:false}

# Logging
logging:
  level:
    org.springframework.web: INFO
    org.springframework.security: INFO
    oracle.obaas.aznserver: INFO

I like this arrangement because the runtime user is not the same as the schema-management user. Liquibase gets the elevated account it needs to create and manage USER_REPO, and the running application connects as USER_REPO. That is a clean security boundary.

Oracle UCP is configured as the datasource type:

    type: oracle.ucp.jdbc.PoolDataSource
    oracleucp:
      connection-factory-class-name: oracle.jdbc.pool.OracleDataSource
      connection-pool-name: AznServerConnectionPool
      initial-pool-size: 15
      min-pool-size: 10
      max-pool-size: 30

That gives us Oracle-aware connection pooling with very little Spring code. We get the operational benefit of a pool that is meant for Oracle Database, while still configuring it in the usual Spring Boot way.

Create the schema with Liquibase

The changelog controller is small:

---
databaseChangeLog:
  - include:
      file: classpath:db/changelog/dbuser.sql
  - include:
      file: classpath:db/changelog/table.sql
  - include:
      file: classpath:db/changelog/trigger.sql

The first changelog creates and maintains the USER_REPO database user:

-- liquibase formatted sql
-- changeset az_admin:initial_user endDelimiter:/ runAlways:true runOnChange:true
DECLARE
    l_user     VARCHAR2(255);
    l_tblspace VARCHAR2(255);
BEGIN
    BEGIN
        SELECT username INTO l_user FROM DBA_USERS WHERE USERNAME='USER_REPO';
    EXCEPTION WHEN no_data_found THEN
        EXECUTE IMMEDIATE 'CREATE USER "USER_REPO" IDENTIFIED BY "${userRepoPassword}"';
    END;
    EXECUTE IMMEDIATE 'ALTER USER "USER_REPO" IDENTIFIED BY "${userRepoPassword}" ACCOUNT UNLOCK';
    SELECT default_tablespace INTO l_tblspace FROM dba_users WHERE username = 'USER_REPO';
    EXECUTE IMMEDIATE 'ALTER USER "USER_REPO" QUOTA UNLIMITED ON ' || l_tblspace;
    EXECUTE IMMEDIATE 'GRANT CONNECT TO "USER_REPO"';
    EXECUTE IMMEDIATE 'GRANT RESOURCE TO "USER_REPO"';
    EXECUTE IMMEDIATE 'ALTER USER "USER_REPO" DEFAULT ROLE CONNECT,RESOURCE';
END;
/
--rollback drop user "USER_REPO" cascade;

The next changelog creates the user table:

-- liquibase formatted sql
-- changeset az_admin:initial_table
CREATE TABLE USER_REPO.USERS
(
    USER_ID    NUMBER GENERATED ALWAYS AS IDENTITY (START WITH 1 CACHE 20),
    PASSWORD   VARCHAR2(255 CHAR) NOT NULL,
    ROLES      VARCHAR2(255 CHAR) NOT NULL,
    USERNAME   VARCHAR2(255 CHAR) NOT NULL,
    CREATED_ON TIMESTAMP DEFAULT SYSDATE,
    CREATED_BY VARCHAR2(100) DEFAULT COALESCE(
        REGEXP_SUBSTR(SYS_CONTEXT('USERENV','CLIENT_IDENTIFIER'),'^[^:]*'),
        SYS_CONTEXT('USERENV','SESSION_USER')),
    UPDATED_ON TIMESTAMP,
    UPDATED_BY VARCHAR2(255),
    PRIMARY KEY (USER_ID),
    CONSTRAINT USERNAME_UQ UNIQUE (USERNAME)
) LOGGING;
COMMENT ON TABLE USER_REPO.USERS is 'Application user repository for OAuth2/OIDC user management';
COMMENT ON COLUMN USER_REPO.USERS.PASSWORD is 'BCrypt hash of the application user password; never store cleartext';
ALTER TABLE USER_REPO.USERS ADD EMAIL VARCHAR2(255 CHAR) NULL;
ALTER TABLE USER_REPO.USERS ADD OTP VARCHAR2(255 CHAR) NULL;
COMMENT ON COLUMN USER_REPO.USERS.OTP is 'BCrypt hash of the one-time password; never store cleartext';
--rollback DROP TABLE USER_REPO.USERS;

There are some good Oracle Database features doing useful work here:

  • GENERATED ALWAYS AS IDENTITY gives us database-managed user ids.
  • The unique constraint protects usernames at the database level.
  • Column comments document sensitive columns right where they live.
  • The table belongs to the USER_REPO schema, not to the application admin user.
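
To see two of those features at work, here is an illustrative pair of inserts; they are my examples, not part of the changelog:

-- USER_ID is generated by the database, so the insert does not supply it.
INSERT INTO USER_REPO.USERS (USERNAME, PASSWORD, ROLES)
VALUES ('demo-user', '$2a$10$illustrative-bcrypt-hash', 'ROLE_USER');

-- Repeating the username violates USERNAME_UQ and fails with ORA-00001.
INSERT INTO USER_REPO.USERS (USERNAME, PASSWORD, ROLES)
VALUES ('demo-user', '$2a$10$another-hash', 'ROLE_USER');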

Finally, we add a small audit trigger:

-- liquibase formatted sql
-- changeset az_admin:initial_trigger endDelimiter:/
CREATE OR REPLACE EDITIONABLE TRIGGER "USER_REPO"."AUDIT_TRG" BEFORE
UPDATE ON USER_REPO.USERS FOR EACH ROW
BEGIN
    :NEW.UPDATED_ON := SYSDATE;
    :NEW.UPDATED_BY := COALESCE(REGEXP_SUBSTR(SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER'), '^[^:]*'), SYS_CONTEXT('USERENV', 'SESSION_USER'));
END;
/
--rollback DROP TRIGGER "USER_REPO"."AUDIT_TRG";

This is a nice example of letting the database enforce something that belongs in the database. Every update gets audit fields set consistently, whether the update came from this Spring application or from another controlled path later.

Map the Oracle table to a JPA entity

Now we need a JPA entity for the USER_REPO.USERS table.

// Copyright (c) 2023, 2026, Oracle and/or its affiliates.
package oracle.obaas.aznserver.model;

import com.fasterxml.jackson.annotation.JsonProperty;
import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import jakarta.persistence.Table;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.ToString;

@Entity
@Table(name = "users", schema = "user_repo")
@Data
@AllArgsConstructor
@NoArgsConstructor
@ToString(exclude = {"password", "otp"})
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "USER_ID")
    private Long userId;

    @Column(name = "USERNAME", nullable = false)
    private String username;

    /**
     * Stores the BCrypt hash that is persisted in USER_REPO.USERS.PASSWORD.
     * Cleartext passwords may be accepted at API boundaries, but they must be
     * encoded before this entity is saved.
     */
    @JsonProperty(access = JsonProperty.Access.WRITE_ONLY)
    @Column(name = "PASSWORD", nullable = false, length = 255)
    private String password;

    @Column(name = "ROLES", nullable = false)
    private String roles;

    @Column(name = "EMAIL")
    private String email;

    @JsonProperty(access = JsonProperty.Access.WRITE_ONLY)
    @Column(name = "OTP")
    private String otp;

    /**
     * Create a user object.
     *
     * @param username The username.
     * @param password The encoded password hash for persistence.
     * @param roles The roles assigned the user, as a comma separated list, e.g.
     *        "ROLE_USER,ROLE_ADMIN".
     */
    public User(String username, String password, String roles) {
        this.username = username;
        this.password = password;
        this.roles = roles;
    }

    // This constructor should only be used during testing with a mock repository,
    // when we need to set the id manually
    public User(long userId, String username, String password, String roles) {
        this(username, password, roles);
        this.userId = userId;
    }

    /**
     * Create a user object.
     *
     * @param username The username.
     * @param password The encoded password hash for persistence.
     * @param roles The roles assigned the user, as a comma separated list, e.g.
     *        "ROLE_USER,ROLE_ADMIN".
     * @param email The email associated with user account.
     */
    public User(String username, String password, String roles, String email) {
        this(username, password, roles);
        this.email = email;
    }
}

There are two small but important security choices in this class.

First, password and otp are write-only for JSON serialization. That means the API can accept these values in request bodies, but it will not serialize them back into responses.

Second, Lombok’s @ToString excludes password and otp. That helps prevent secrets from being accidentally written into logs.

The repository is exactly what we want from Spring Data JPA: small, declarative, and focused on the queries the service needs.

// Copyright (c) 2022, 2023, Oracle and/or its affiliates.
package oracle.obaas.aznserver.repository;

import java.util.List;
import java.util.Optional;

import oracle.obaas.aznserver.model.User;
import org.springframework.data.jpa.repository.JpaRepository;

public interface UserRepository extends JpaRepository<User, Long> {

    Optional<User> findByUsername(String username);

    Optional<User> findByUsernameIgnoreCase(String username);

    Optional<User> findByUserId(Long userId);

    List<User> findUsersByUsernameStartsWithIgnoreCase(String username);

    Optional<User> findByEmailIgnoreCase(String email);
}

This is one of the places where Spring Data JPA shines. The method names communicate intent, Spring implements the queries, and the application code stays readable.

Adapt the database user to Spring Security

Spring Security authenticates with UserDetails. Our database user is a domain object, so we wrap it in a SecurityUser.

// Copyright (c) 2022, 2026, Oracle and/or its affiliates.
package oracle.obaas.aznserver.model;

import java.util.Arrays;
import java.util.Collection;
import java.util.List;

import org.apache.commons.lang3.StringUtils;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.authority.SimpleGrantedAuthority;
import org.springframework.security.core.userdetails.UserDetails;

public class SecurityUser implements UserDetails {

    private final User user;

    public SecurityUser(User user) {
        this.user = user;
    }

    @Override
    public String getUsername() {
        return user.getUsername();
    }

    @Override
    public String getPassword() {
        return user.getPassword();
    }

    @Override
    public Collection<? extends GrantedAuthority> getAuthorities() {
        if (StringUtils.isBlank(user.getRoles())) {
            return List.of();
        }
        return Arrays.stream(user
                .getRoles()
                .split(","))
                .map(SimpleGrantedAuthority::new)
                .toList();
    }

    @Override
    public boolean isAccountNonExpired() {
        return true;
    }

    @Override
    public boolean isAccountNonLocked() {
        return true;
    }

    @Override
    public boolean isCredentialsNonExpired() {
        return true;
    }

    @Override
    public boolean isEnabled() {
        return true;
    }
}

Then we create a UserDetailsService backed by the JPA repository:

// Copyright (c) 2022, 2026, Oracle and/or its affiliates.
package oracle.obaas.aznserver.service;

import oracle.obaas.aznserver.model.SecurityUser;
import oracle.obaas.aznserver.repository.UserRepository;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.core.userdetails.UsernameNotFoundException;
import org.springframework.stereotype.Service;

@Service
public class JpaUserDetailsService implements UserDetailsService {

    private final UserRepository userRepository;

    public JpaUserDetailsService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    @Override
    public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
        return userRepository
                .findByUsername(username)
                .map(SecurityUser::new)
                .orElseThrow(() -> new UsernameNotFoundException("Authentication failed"));
    }
}

This is the bridge between Oracle Database and Spring Security. Once this service exists, Spring Security can authenticate users stored in USER_REPO.USERS.

Configure Spring Security and Spring Authorization Server

The security configuration is the heart of the application. It does several things:

  • Creates a role hierarchy.
  • Enables method security.
  • Creates a dedicated authorization-server filter chain.
  • Creates a separate actuator filter chain.
  • Creates a stateless API filter chain.
  • Provides password encoding.
  • Provides development/test signing keys.
  • Optionally creates a local OAuth client.

Here is the role hierarchy:

    public static final String ROLE_HIERARCHY = "ROLE_ADMIN > ROLE_USER\n"
            + "ROLE_ADMIN > ROLE_CONFIG_EDITOR\n"
            + "ROLE_CONFIG_EDITOR > ROLE_USER";

    /**
     * Configure a role hierarchy such that ADMIN "includes"/implies USER.
     *
     * @return the hierarchy.
     */
    @Bean
    public RoleHierarchy roleHierarchy() {
        return RoleHierarchyImpl.fromHierarchy(ROLE_HIERARCHY);
    }

    /**
     * Configure method security to use the role hierarchy.
     * 
     * @param roleHierarchy injected by Spring.
     * @return The MethodSecurityExpressionHandler.
     */
    @Bean
    public MethodSecurityExpressionHandler methodSecurityExpressionHandler(RoleHierarchy roleHierarchy) {
        DefaultMethodSecurityExpressionHandler expressionHandler = new DefaultMethodSecurityExpressionHandler();
        expressionHandler.setRoleHierarchy(roleHierarchy);
        return expressionHandler;
    }

Role hierarchy is one of those Spring Security features that is easy to miss but very useful. If an administrator should also be treated as a user, we do not have to duplicate every role check everywhere. We can teach Spring Security that ROLE_ADMIN includes ROLE_USER.
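
For example, with the hierarchy above in place, an endpoint that only checks for ROLE_USER is automatically available to administrators too. The sample's real ping endpoints are not shown in this post, but a minimal sketch of one, assuming the same controller conventions, would be:

    // obaas-admin passes this check even though it only asks for ROLE_USER,
    // because the hierarchy says ROLE_ADMIN implies ROLE_USER.
    @PreAuthorize("hasRole('USER')")
    @GetMapping("/pinguser")
    public ResponseEntity<String> pingUser() {
        return new ResponseEntity<>("pong", HttpStatus.OK);
    }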

The authorization server gets its own filter chain:

    /**
     * Authorization Server endpoints use their own filter chain so OAuth protocol
     * handling does not inherit API-specific stateless settings.
     *
     * @param http HttpSecurity injected by Spring.
     * @return the SecurityFilterChain.
     * @throws Exception if unable to create the chain.
     */
    @Bean
    @Order(1)
    public SecurityFilterChain authorizationServerSecurityFilterChain(HttpSecurity http)
            throws Exception {
        log.debug("In authorizationServerSecurityFilterChain");
        OAuth2AuthorizationServerConfigurer authorizationServerConfigurer =
                OAuth2AuthorizationServerConfigurer.authorizationServer();

        http
            .securityMatcher(authorizationServerConfigurer.getEndpointsMatcher())
            .with(authorizationServerConfigurer, authorizationServer ->
                authorizationServer.oidc(Customizer.withDefaults()))
            .authorizeHttpRequests((authorize) -> authorize
                .requestMatchers("/.well-known/**", "/oauth2/jwks").permitAll()
                .anyRequest().authenticated())
            .csrf((csrf) -> csrf.ignoringRequestMatchers(authorizationServerConfigurer.getEndpointsMatcher()))
            .exceptionHandling((exceptions) -> exceptions.defaultAuthenticationEntryPointFor(
                new LoginUrlAuthenticationEntryPoint("/login"),
                new MediaTypeRequestMatcher(MediaType.TEXT_HTML)));
        return http.build();
    }

This is where Spring Authorization Server does a lot of heavy lifting. The endpoints matcher identifies the protocol endpoints, OIDC support is enabled, and the well-known metadata and JWK endpoints are allowed anonymously.

The user-management API has a different shape. It is stateless and uses HTTP Basic:

    /**
     * Create a SecurityFilterChain for the user-management API.
     * @param http HttpSecurity injected by Spring. 
     * @param userDetailsService the JPA-backed user details service.
     * @return the SecurityFilterChain.
     * @throws Exception if unable to create the chain.
     */
    @Bean
    @Order(3)
    public SecurityFilterChain apiSecurityFilterChain(HttpSecurity http, UserDetailsService userDetailsService)
            throws Exception {
        log.debug("In apiSecurityFilterChain");
        http
            .securityMatcher("/user/api/**", "/error/**")
            .authorizeHttpRequests((authorize) -> authorize
                .requestMatchers("/error/**").permitAll()
                .requestMatchers("/user/api/v1/ping").permitAll()
                .requestMatchers("/user/api/v1/forgot").permitAll()
                .anyRequest().authenticated()
            )
            .sessionManagement(session ->
                session.sessionCreationPolicy(SessionCreationPolicy.STATELESS))
            .httpBasic(Customizer.withDefaults())
            .userDetailsService(userDetailsService);
        // The user-management API is stateless and does not use browser sessions or cookies.
        http.csrf(csrf -> csrf.disable());
        return http.build();
    }

The separation between the authorization-server chain and the API chain matters. The OAuth2/OIDC endpoints are protocol endpoints. The user API is a REST API. They have different security needs, so they get different chains.
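
The bullet list earlier also mentioned a separate actuator filter chain. It is not shown in this post, but given that the other chains are @Order(1) and @Order(3), it is presumably a small @Order(2) chain along these lines (a sketch, not the project's exact code):

    @Bean
    @Order(2)
    public SecurityFilterChain actuatorSecurityFilterChain(HttpSecurity http) throws Exception {
        http
            .securityMatcher("/actuator/**")
            .authorizeHttpRequests((authorize) -> authorize
                // The smoke test later calls /actuator/health anonymously.
                .requestMatchers("/actuator/health").permitAll()
                .anyRequest().authenticated())
            .httpBasic(Customizer.withDefaults());
        return http.build();
    }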

The authentication provider uses our JPA-backed user details service and a BCrypt password encoder:

    /**
     * Create an Authentication Provider for our UserDetailsService.
     * @param userDetailsService the JPA-backed user details service.
     * @param passwordEncoder password encoder for stored password hashes.
     * @return the AuthenticationProvider.
     */
    @Bean
    public DaoAuthenticationProvider authenticationProvider(UserDetailsService userDetailsService,
            PasswordEncoder passwordEncoder) {
        DaoAuthenticationProvider auth = new DaoAuthenticationProvider(userDetailsService);
        auth.setPasswordEncoder(passwordEncoder);
        return auth;
    }

And the password encoder is:

    @Bean
    public PasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }

Passwords in the API enter as cleartext at the boundary, but they are stored as BCrypt hashes in Oracle Database. That is exactly the line we want: cleartext only at the edge, hashes at rest.
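
If you have not worked with BCrypt before, the key property is that hashing is one-way and salted. A quick illustrative snippet:

    PasswordEncoder encoder = new BCryptPasswordEncoder();
    String hash = encoder.encode("Secret123!");         // e.g. "$2a$10$..."
    boolean ok = encoder.matches("Secret123!", hash);   // true
    // A second encode() of the same input yields a different hash,
    // because BCrypt generates a fresh salt each time.
    boolean same = encoder.encode("Secret123!").equals(hash); // false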

For local development and tests, the application can create an opt-in registered client:

    /**
     * Create an opt-in local client for test and developer-only contexts.
     *
     * Production deployments should configure registered clients explicitly using
     * Spring Boot's authorization-server client properties.
     *
     * @param passwordEncoder password encoder for the client secret.
     * @param clientSecret configured client secret.
     * @return a local RegisteredClientRepository.
     */
    @Bean
    @ConditionalOnMissingBean
    @ConditionalOnProperty(prefix = "azn.authorization-server.default-client", name = "enabled",
            havingValue = "true")
    public RegisteredClientRepository localRegisteredClientRepository(PasswordEncoder passwordEncoder,
            @Value("${azn.authorization-server.default-client.secret:}") String clientSecret) {
        if (!StringUtils.hasText(clientSecret)) {
            throw new IllegalStateException("azn.authorization-server.default-client.secret must be set when "
                    + "azn.authorization-server.default-client.enabled=true");
        }
        RegisteredClient registeredClient = RegisteredClient.withId(UUID.randomUUID().toString())
                .clientId("azn-local-client")
                .clientSecret(passwordEncoder.encode(clientSecret))
                .clientAuthenticationMethod(ClientAuthenticationMethod.CLIENT_SECRET_BASIC)
                .authorizationGrantType(AuthorizationGrantType.CLIENT_CREDENTIALS)
                .authorizationGrantType(AuthorizationGrantType.AUTHORIZATION_CODE)
                .authorizationGrantType(AuthorizationGrantType.REFRESH_TOKEN)
                .redirectUri("http://127.0.0.1:8080/login/oauth2/code/azn-local-client")
                .scope(OidcScopes.OPENID)
                .scope("user.read")
                .clientSettings(ClientSettings.builder().requireProofKey(true).build())
                .build();
        return new InMemoryRegisteredClientRepository(registeredClient);
    }

Notice that this is opt-in. That is intentional. Local convenience is useful, but production registered clients should be configured deliberately.
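
For reference, registering a client through Spring Boot's authorization-server properties looks roughly like this. The property names follow Spring Boot 3.1 and later; the client id, secret, URIs, and scopes here are placeholders, and a production secret should be stored encoded rather than with {noop}:

    spring:
      security:
        oauth2:
          authorizationserver:
            client:
              my-client:
                registration:
                  client-id: "my-client"
                  client-secret: "{noop}change-me"
                  client-authentication-methods:
                    - "client_secret_basic"
                  authorization-grant-types:
                    - "authorization_code"
                    - "refresh_token"
                  redirect-uris:
                    - "https://app.example.com/login/oauth2/code/my-client"
                  scopes:
                    - "openid"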

The JWK source is also local by default:

    /**
     * Provide process-local signing keys for development and tests.
     *
     * Production deployments should replace this bean with persistent key material
     * so tokens remain verifiable across restarts and rolling deploys.
     *
     * @return the JWK source.
     */
    @Bean
    @ConditionalOnMissingBean
    public JWKSource<SecurityContext> jwkSource() {
        log.warn("Using process-local generated RSA signing keys. Configure a persistent JWKSource bean for "
                + "production so issued tokens remain verifiable across restarts and rolling deploys.");
        RSAKey rsaKey = generateRsa();
        JWKSet jwkSet = new JWKSet(rsaKey);
        return (jwkSelector, securityContext) -> jwkSelector.select(jwkSet);
    }

That is fine for development and tests. For production, you would provide persistent signing key material so tokens remain verifiable across restarts and rolling deployments.
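
By the way, generateRsa() is not shown in the excerpts above. A typical implementation with the Nimbus JOSE library (the same library that provides JWKSource and RSAKey) looks roughly like this; treat it as a sketch rather than the project's exact code:

    private static RSAKey generateRsa() {
        try {
            // Generate a fresh 2048-bit RSA key pair each time the process starts.
            KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
            generator.initialize(2048);
            KeyPair keyPair = generator.generateKeyPair();
            return new RSAKey.Builder((RSAPublicKey) keyPair.getPublic())
                    .privateKey((RSAPrivateKey) keyPair.getPrivate())
                    .keyID(UUID.randomUUID().toString())
                    .build();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("RSA key pair generation failed", e);
        }
    }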

Bootstrap the first users

An authorization server needs at least one user to get started. This application creates three bootstrap users on startup by default:

  • obaas-admin
  • obaas-user
  • obaas-config

The initializer is in the main application class:

    @Bean
    @ConditionalOnProperty(prefix = "azn.bootstrap-users", name = "enabled", havingValue = "true",
            matchIfMissing = true)
    ApplicationRunner userStoreInitializer(UserRepository users, PasswordEncoder passwordEncoder,
            @Value("${azn.bootstrap-users.admin-password:}") String adminPassword,
            @Value("${azn.bootstrap-users.user-password:}") String userPassword) {
        return args -> initUserStore(users, passwordEncoder, adminPassword, userPassword);
    }

And the implementation creates missing users with BCrypt-encoded passwords:

    public static void initUserStore(UserRepository users, PasswordEncoder encoder,
            String adminPassword, String userPassword) {
        log.debug("ENTER initUserStore");

        String obaasAdminPwd = adminPassword;
        String obaasUserPwd = userPassword;
        String obaasConfigPwd = obaasUserPwd;

        // Check for obaas-user, if not existent create the user
        if (users.findByUsername(OBAAS_USER).isEmpty()) {
            log.debug("Creating user obaas-user");

            obaasUserPwd = bootstrapPassword("azn.bootstrap-users.user-password", obaasUserPwd);

            users.saveAndFlush(new User(OBAAS_USER, encoder.encode(obaasUserPwd),
                    "ROLE_USER"));
        }

        // Check for obaas-admin, if not existent create the user
        Optional<User> adminUser = users.findByUsername(OBAAS_ADMIN);
        if (adminUser.isEmpty()) {
            log.debug("Creating user obaas-admin");

            obaasAdminPwd = bootstrapPassword("azn.bootstrap-users.admin-password", obaasAdminPwd);

            users.saveAndFlush(new User(OBAAS_ADMIN, encoder.encode(obaasAdminPwd),
                    "ROLE_ADMIN,ROLE_CONFIG_EDITOR,ROLE_USER"));
        }

        // Check for obaas-config, if not existent create the user with the same pwd as
        // obaas-user
        if (users.findByUsernameIgnoreCase(OBAAS_CONFIG).isEmpty()) {
            log.debug("Creating user obaas-config");

            obaasConfigPwd = bootstrapPassword("azn.bootstrap-users.user-password", obaasConfigPwd);

            users.saveAndFlush(new User(OBAAS_CONFIG, encoder.encode(obaasConfigPwd),
                    "ROLE_CONFIG_EDITOR,ROLE_USER"));
        }
    }

The bootstrap passwords come from external configuration. If bootstrap users are enabled and the password properties are missing, startup fails:

    private static String bootstrapPassword(String propertyName, String configuredPassword) {
        if (StringUtils.isNotBlank(configuredPassword)) {
            return configuredPassword;
        }
        throw new IllegalStateException(propertyName + " must be set when azn.bootstrap-users.enabled=true");
    }

That is much better than quietly creating default passwords.

Build the user-management API

The API is a normal Spring REST controller:

@RestController
@RequestMapping("/user/api/v1")
@Slf4j
public class DbUserRepoController {

    public static final String ROLE_ADMIN = "ADMIN";
    private static final String isAdminUser = "hasRole('ADMIN')";
    private static final String isUser = "hasRole('USER')";
    private static final Pattern PASSWORD_PATTERN =
            Pattern.compile("^(?=.*[?!$%^*\\-_])(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z]).{12,}$");

    final UserRepository userRepository;
    final PasswordEncoder passwordEncoder;

    public DbUserRepoController(UserRepository userRepository, PasswordEncoder passwordEncoder) {
        this.userRepository = userRepository;
        this.passwordEncoder = passwordEncoder;
    }

The connect endpoint is a simple authenticated check:

    @PreAuthorize("hasAnyRole('ADMIN','USER','CONFIG_EDITOR')")
    @GetMapping("/connect")
    public ResponseEntity<String> connect() {
        Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
        UserDetails userDetails = (UserDetails) authentication.getPrincipal();
        String authorities = userDetails.getAuthorities().toString();

        log.debug("/connect Username: {}", authentication.getName());
        log.debug("/connect Authorities: {}", userDetails.getAuthorities());
        log.debug("/connect Details: {}", authentication.getDetails());

        return new ResponseEntity<>(authorities, HttpStatus.OK);
    }

Creating a user is restricted to admins:

    @PreAuthorize(isAdminUser)
    @PostMapping("/createUser")
    public ResponseEntity<?> createUser(@RequestBody User user) {

        // If user exists return HTTP Status 409.
        Optional<User> checkUser = userRepository.findByUsernameIgnoreCase(user.getUsername());
        if (checkUser.isPresent()) {
            log.debug("User exists");
            return new ResponseEntity<>("User already exists", HttpStatus.CONFLICT);
        }

        if (!isValidPassword(user.getPassword())) {
            return new ResponseEntity<>("Password does not meet complexity requirements",
                    HttpStatus.UNPROCESSABLE_ENTITY);
        }

        if (StringUtils.isNotEmpty(user.getEmail())) {
            Optional<User> userAlreadyAssociatedWithEMail = userRepository.findByEmailIgnoreCase(user.getEmail());
            if (userAlreadyAssociatedWithEMail.isPresent()) {
                log.debug("User exists");
                return new ResponseEntity<>("Another user exists with same email", HttpStatus.CONFLICT);
            }
        }

        // Validate roles in RequestBody
        boolean hasValidRole = validateRole(user);
        log.debug("Valid role: {}", hasValidRole);

        // If Valid role create the user else send HTTP 422
        if (hasValidRole) {
            try {
                User users = userRepository.save(new User(
                        user.getUsername(),
                        passwordEncoder.encode(user.getPassword()),
                        user.getRoles(), user.getEmail()));
                return new ResponseEntity<>(users, HttpStatus.CREATED);
            } catch (Exception e) {
                return new ResponseEntity<>(null, HttpStatus.INTERNAL_SERVER_ERROR);
            }
        } else {
            return new ResponseEntity<>(null, HttpStatus.UNPROCESSABLE_ENTITY);
        }
    }

There are a few important details in here:

  • Usernames are checked case-insensitively.
  • Duplicate emails are rejected.
  • Password complexity is enforced before saving.
  • Roles are validated against the enum.
  • Passwords are encoded before the entity is persisted.

The password validation is intentionally small and explicit:

    private boolean isValidPassword(String password) {
        return StringUtils.isNotBlank(password) && PASSWORD_PATTERN.matcher(password).matches();
    }

The role validation uses the enum:

    private boolean validateRole(User user) {
        try {
            if (StringUtils.isBlank(user.getRoles())) {
                return false;
            }
            Arrays.stream(user.getRoles().toUpperCase()
                    .replace("[", "")
                    .replace("]", "")
                    .replace(" ", "")
                    .split(","))
                    .map(UserRoles::valueOf)
                    .toList();
            return true;
        } catch (IllegalArgumentException illegalArgumentException) {
            return false;
        }
    }
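
The UserRoles enum itself is not shown in the excerpts above. Presumably it is something small like this, matching the roles used by the bootstrap users and the role hierarchy (a sketch, not the project's exact code):

    public enum UserRoles {
        ROLE_ADMIN,
        ROLE_USER,
        ROLE_CONFIG_EDITOR
    }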

Password changes are available to admins or to the user changing their own password:

    @PreAuthorize(isUser)
    @PutMapping("/updatePassword")
    public ResponseEntity<User> changePassword(@RequestBody UserInfoDto userInfo) {

        if (!isValidPassword(userInfo.password())) {
            return new ResponseEntity<>(null, HttpStatus.UNPROCESSABLE_ENTITY);
        }

        // Check if the user is a user with ADMIN
        SecurityContext securityContext = SecurityContextHolder.getContext();
        boolean isAdminUser = false;

        for (GrantedAuthority role : securityContext.getAuthentication().getAuthorities()) {
            if (role.getAuthority().contains(ROLE_ADMIN)) {
                isAdminUser = true;
            }
        }

        // TODO: Must update the correspondent secret??

        // If the username of the authenticated user matches the request body username,
        // or if it is a user with ROLE_ADMIN
        if ((userInfo.username().compareTo(securityContext.getAuthentication().getName()) == 0) || isAdminUser) {
            try {
                Optional<User> user = userRepository.findByUsername(userInfo.username());
                if (user.isPresent()) {
                    user.get().setPassword(passwordEncoder.encode(userInfo.password()));
                    userRepository.saveAndFlush(user.get());
                    return new ResponseEntity<>(null, HttpStatus.OK);
                } else {
                    return new ResponseEntity<>(null, HttpStatus.NO_CONTENT);
                }
            } catch (Exception e) {
                return new ResponseEntity<>(null, HttpStatus.INTERNAL_SERVER_ERROR);
            }
        } else {
            return new ResponseEntity<>(null, HttpStatus.FORBIDDEN);
        }
    }

And the forgot-password flow keeps OTP values hashed too:

    @PostMapping("/forgot")
    public ResponseEntity<UserInfoDto> createOTP(@RequestBody(required = true) User inUser) {
        if (StringUtils.isNotEmpty(inUser.getUsername()) && StringUtils.isNotEmpty(inUser.getOtp())) {
            try {
                Optional<User> user = userRepository.findByUsernameIgnoreCase(inUser.getUsername());
                if (user.isEmpty()) {
                    return new ResponseEntity<>(HttpStatus.NO_CONTENT);
                }
                user.get().setOtp(passwordEncoder.encode(inUser.getOtp()));
                userRepository.saveAndFlush(user.get());
                return new ResponseEntity<>(null,
                        HttpStatus.OK);
            } catch (Exception e) {
                return new ResponseEntity<>(null, HttpStatus.INTERNAL_SERVER_ERROR);
            }

        } else {
            return new ResponseEntity<>(null, HttpStatus.UNPROCESSABLE_ENTITY);
        }
    }

The reset endpoint compares the provided OTP against the stored BCrypt hash:

    @PutMapping("/forgot")
    public ResponseEntity<?> reset(@RequestBody(required = true) User inUser) {
        if (StringUtils.isNotEmpty(inUser.getUsername()) && StringUtils.isNotEmpty(inUser.getOtp())
                && StringUtils.isNotEmpty(inUser.getPassword())) {
            if (!isValidPassword(inUser.getPassword())) {
                return new ResponseEntity<>("Password does not meet complexity requirements",
                        HttpStatus.UNPROCESSABLE_ENTITY);
            }
            try {
                Optional<User> user = userRepository.findByUsernameIgnoreCase(inUser.getUsername());
                if (user.isEmpty()) {
                    return new ResponseEntity<>("User does not exist", HttpStatus.NO_CONTENT);
                }

                if (StringUtils.isEmpty(user.get().getOtp())) {
                    return new ResponseEntity<>("OTP not generated.", HttpStatus.CONFLICT);
                }

                if (StringUtils.isEmpty(user.get().getPassword())) {
                    return new ResponseEntity<>("Password not provided.", HttpStatus.CONFLICT);
                }

                if (!passwordEncoder.matches(inUser.getOtp(), user.get().getOtp())) {
                    return new ResponseEntity<>("OTP does not match.", HttpStatus.CONFLICT);
                }

                if (passwordEncoder.matches(inUser.getPassword(), user.get().getPassword())) {
                    return new ResponseEntity<>("Password cannot be the same as the previous password.",
                            HttpStatus.CONFLICT);
                }

                user.get().setOtp(null);
                user.get().setPassword(passwordEncoder.encode(inUser.getPassword()));
                userRepository.saveAndFlush(user.get());

                return new ResponseEntity<>("Password successfully changed.",
                        HttpStatus.OK);
            } catch (Exception e) {
                return new ResponseEntity<>(null, HttpStatus.INTERNAL_SERVER_ERROR);
            }

        } else {
            return new ResponseEntity<>(null, HttpStatus.UNPROCESSABLE_ENTITY);
        }
    }

Again, the pattern is the same: accept secret values at the boundary, compare or encode them through Spring Security’s PasswordEncoder, and do not disclose them in responses.

Run Locally

Now let’s run the finished application. This section follows the Run Locally flow from the repository README, because it is the quickest way to prove that Oracle Database is available, Liquibase can create the schema, and the bootstrap users can be created.

Start with an Oracle database that the Liquibase admin user can connect to. For a disposable local Oracle database, you can use the same image family as the integration tests:

docker run --name azn-oracle --rm -p 1521:1521 \
  -e ORACLE_PASSWORD='LocalSystem123!' \
  gvenzl/oracle-free:23.26.1-slim-faststart

In another terminal, configure the app. The USER_REPO password is the Oracle schema password that Liquibase assigns to the runtime database user. The bootstrap passwords are application user passwords and will be stored as BCrypt hashes in USER_REPO.USERS.

export AZN_DATASOURCE_URL='jdbc:oracle:thin:@//localhost:1521/FREEPDB1'
export AZN_LIQUIBASE_USERNAME='SYSTEM'
export AZN_LIQUIBASE_PASSWORD='LocalSystem123!'
export AZN_USER_REPO_USERNAME='USER_REPO'
export AZN_USER_REPO_PASSWORD='LocalUserRepo123!'
export ORACTL_ADMIN_PASSWORD='LocalAdmin123!'
export ORACTL_USER_PASSWORD='LocalUser123!'
export AZN_AUTHORIZATION_SERVER_DEFAULT_CLIENT_ENABLED=true
export AZN_AUTHORIZATION_SERVER_DEFAULT_CLIENT_SECRET='LocalClient123!'

Run the application:

mvn spring-boot:run

The app listens on http://localhost:8080.

Smoke Test API

Now we can walk through the finished code using the Smoke Test API flow from the README. These calls verify that the app is running, the Authorization Server endpoints are exposed, users can authenticate, and an admin can create and update users.

Before we call the API, set up the shell variables used by the walkthrough:

export BASE_URL='http://localhost:8080'
export ADMIN_USER='obaas-admin'
export ADMIN_PASSWORD='LocalAdmin123!'
export TEST_USER='readme-user'
export TEST_PASSWORD='ReadmeUser123!'
export TEST_PASSWORD_2='ReadmeUser456!'
export TEST_EMAIL='[email protected]'

First, check the anonymous endpoints:

curl -i "$BASE_URL/actuator/health"
curl -i "$BASE_URL/user/api/v1/ping"
curl -i "$BASE_URL/.well-known/oauth-authorization-server"
curl -i "$BASE_URL/oauth2/jwks"

This verifies the basic shape of the running service. Actuator health is available, the unauthenticated ping endpoint works, the authorization server metadata is published, and the JWK endpoint is available for token verification.

Now authenticate with the bootstrap admin user:

curl -i -u "$ADMIN_USER:$ADMIN_PASSWORD" "$BASE_URL/user/api/v1/connect"
curl -i -u "$ADMIN_USER:$ADMIN_PASSWORD" "$BASE_URL/user/api/v1/pingadmin"

The admin user was inserted during startup by the bootstrap initializer. The password came from configuration and was stored in Oracle Database as a BCrypt hash.
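
If you want to see that for yourself, you can peek at the stored values in the local container. This is illustrative; the credentials match the environment variables set earlier, and BCrypt hashes start with a $2 prefix:

docker exec -i azn-oracle sqlplus -s 'USER_REPO/LocalUserRepo123!@//localhost:1521/FREEPDB1' <<'EOF'
SELECT username, SUBSTR(password, 1, 7) AS hash_prefix FROM users;
EXIT;
EOF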

If you are rerunning this sequence, remove the sample user first:

curl -i -u "$ADMIN_USER:$ADMIN_PASSWORD"
-X DELETE
"$BASE_URL/user/api/v1/deleteUsername?username=$TEST_USER"

Create a user. Passwords must be at least 12 characters and include uppercase, lowercase, a number, and one of ?!$%^*-_.

curl -i -u "$ADMIN_USER:$ADMIN_PASSWORD"
-H 'Content-Type: application/json'
-d "{"username":"$TEST_USER","password":"$TEST_PASSWORD","roles":"ROLE_USER","email":"$TEST_EMAIL"}"
"$BASE_URL/user/api/v1/createUser"

This call exercises several things at once. Spring Security authorizes the admin request, the controller validates the role and password, the password is encoded with BCrypt, and JPA stores the user in USER_REPO.USERS.

Verify that the new user can authenticate and use a user endpoint:

curl -i -u "$TEST_USER:$TEST_PASSWORD" "$BASE_URL/user/api/v1/connect"
curl -i -u "$TEST_USER:$TEST_PASSWORD" "$BASE_URL/user/api/v1/pinguser"

Now find the user as an admin. Password and OTP fields are write-only and should not appear in the response body.

curl -i -u "$ADMIN_USER:$ADMIN_PASSWORD"
"$BASE_URL/user/api/v1/findUser?username=$TEST_USER"

That response is a useful security check. The API can accept sensitive values, but it should not echo them back.

Next, change the user’s role, then authenticate with the same user against the config-editor endpoint:

curl -i -u "$ADMIN_USER:$ADMIN_PASSWORD"
-X PUT
-H 'Content-Type: application/json'
-d "{"username":"$TEST_USER","roles":"ROLE_CONFIG_EDITOR"}"
"$BASE_URL/user/api/v1/changeRole"
curl -i -u "$TEST_USER:$TEST_PASSWORD" "$BASE_URL/user/api/v1/pingceditor"

That shows the method-security path working. The role stored in Oracle Database changes, Spring Security reads it through the JPA-backed UserDetailsService, and the endpoint authorization follows the updated authorities.

Change the user’s email:

curl -i -u "$ADMIN_USER:$ADMIN_PASSWORD"
-X PUT
-H 'Content-Type: application/json'
-d "{"username":"$TEST_USER","email":"updated-$TEST_EMAIL"}"
"$BASE_URL/user/api/v1/changeEmail"

Change the user’s password as the user, then authenticate with the new password:

curl -i -u "$TEST_USER:$TEST_PASSWORD"
-X PUT
-H 'Content-Type: application/json'
-d "{"username":"$TEST_USER","password":"$TEST_PASSWORD_2"}"
"$BASE_URL/user/api/v1/updatePassword"
curl -i -u "$TEST_USER:$TEST_PASSWORD_2" "$BASE_URL/user/api/v1/connect"

Again, the cleartext password only crosses the API boundary. The value stored in Oracle Database is a BCrypt hash.

Exercise the forgot-password flow. The OTP is accepted in the request but is stored as a BCrypt hash and is not disclosed by the lookup endpoint.

curl -i \
  -H 'Content-Type: application/json' \
  -d "{\"username\":\"$TEST_USER\",\"otp\":\"123456\"}" \
  "$BASE_URL/user/api/v1/forgot"

curl -i "$BASE_URL/user/api/v1/forgot?username=$TEST_USER"

curl -i \
  -X PUT \
  -H 'Content-Type: application/json' \
  -d "{\"username\":\"$TEST_USER\",\"otp\":\"123456\",\"password\":\"ReadmeReset123!\"}" \
  "$BASE_URL/user/api/v1/forgot"

Finally, verify the opt-in local OAuth client with the client credentials flow:

curl -i -u 'azn-local-client:LocalClient123!' \
  -d 'grant_type=client_credentials' \
  -d 'scope=user.read' \
  "$BASE_URL/oauth2/token"

That call hits the Spring Authorization Server token endpoint and should return a bearer access token.
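
A successful response carries a standard OAuth2 token payload, roughly this shape (values abbreviated here, and the exact fields depend on configuration):

{
  "access_token": "eyJraWQiOi...",
  "scope": "user.read",
  "token_type": "Bearer",
  "expires_in": 299
}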

When you are finished, you can clean up the demo user:

curl -i -u "$ADMIN_USER:$ADMIN_PASSWORD"
-X DELETE
"$BASE_URL/user/api/v1/deleteUsername?username=$TEST_USER"

The most useful local endpoints are:

  • GET /actuator/health
  • GET /.well-known/oauth-authorization-server
  • GET /oauth2/jwks
  • GET /user/api/v1/ping

At this point we have an Oracle-backed user repository, Spring Security authentication against that repository, a working user-management API, and Spring Authorization Server issuing tokens.

Test against a real Oracle Database

The integration tests use Testcontainers with Oracle Free:

@Testcontainers
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
abstract class OracleIntegrationTestSupport {

    private static final DockerImageName ORACLE_IMAGE =
            DockerImageName.parse("gvenzl/oracle-free:23.26.1-slim-faststart");
    private static final AtomicInteger POOL_SEQUENCE = new AtomicInteger();
    private static final SecureRandom PASSWORD_RANDOM = new SecureRandom();

    static final String BOOTSTRAP_PASSWORD = generatedPassword();
    static final String USER_REPO_PASSWORD = generatedPassword();
    private static final String ORACLE_PASSWORD = generatedPassword();

    @Container
    static final OracleContainer ORACLE = new OracleContainer(ORACLE_IMAGE)
            .withPassword(ORACLE_PASSWORD);

The test support wires Spring Boot to the container:

    static void configureOracleProperties(DynamicPropertyRegistry registry) {
        String poolName = "AznServerOracleIT-" + POOL_SEQUENCE.incrementAndGet();

        registry.add("spring.datasource.url", ORACLE::getJdbcUrl);
        registry.add("spring.datasource.username", () -> "USER_REPO");
        registry.add("spring.datasource.password", () -> USER_REPO_PASSWORD);
        registry.add("spring.datasource.driver-class-name", ORACLE::getDriverClassName);
        registry.add("spring.datasource.type", () -> "oracle.ucp.jdbc.PoolDataSource");
        registry.add("spring.datasource.oracleucp.connection-factory-class-name",
                () -> "oracle.jdbc.pool.OracleDataSource");
        registry.add("spring.datasource.oracleucp.connection-pool-name", () -> poolName);
        registry.add("spring.datasource.oracleucp.initial-pool-size", () -> "1");
        registry.add("spring.datasource.oracleucp.min-pool-size", () -> "1");
        registry.add("spring.datasource.oracleucp.max-pool-size", () -> "4");
        registry.add("spring.liquibase.url", ORACLE::getJdbcUrl);
        registry.add("spring.liquibase.user", () -> "system");
        registry.add("spring.liquibase.password", ORACLE::getPassword);
        registry.add("spring.liquibase.parameters.userRepoPassword", () -> USER_REPO_PASSWORD);
        registry.add("spring.liquibase.enabled", () -> "true");
        registry.add("azn.bootstrap-users.enabled", () -> "true");
        registry.add("azn.bootstrap-users.admin-password", () -> BOOTSTRAP_PASSWORD);
        registry.add("azn.bootstrap-users.user-password", () -> BOOTSTRAP_PASSWORD);
        registry.add("azn.authorization-server.default-client.secret", () -> "TestLocalClientSecret123!");
        registry.add("eureka.client.enabled", () -> "false");
        registry.add("spring.cloud.discovery.enabled", () -> "false");
        registry.add("spring.cloud.service-registry.auto-registration.enabled", () -> "false");
    }

This is a big benefit of the Oracle Testcontainers support. The tests exercise the actual database behavior: Liquibase, schema creation, identity columns, BCrypt hashes stored in the table, and the Spring Boot datasource configuration.

The authorization server integration test verifies metadata, JWKs, and token issuance:

    @Test
    void exposesAuthorizationServerMetadataAndJwks() {
        ResponseEntity<String> metadata = restTemplate.getForEntity(
                url("/.well-known/oauth-authorization-server"), String.class);
        ResponseEntity<String> jwks = restTemplate.getForEntity(url("/oauth2/jwks"), String.class);

        assertThat(metadata.getStatusCode()).isEqualTo(HttpStatus.OK);
        assertThat(metadata.getBody()).contains("authorization_endpoint", "token_endpoint", "jwks_uri");
        assertThat(jwks.getStatusCode()).isEqualTo(HttpStatus.OK);
        assertThat(jwks.getBody()).contains("\"keys\"");
    }

    @Test
    void issuesClientCredentialsAccessToken() {
        HttpHeaders headers = new HttpHeaders();
        headers.setBasicAuth("integration-client", "integration-secret");
        headers.setContentType(MediaType.APPLICATION_FORM_URLENCODED);
        MultiValueMap<String, String> body = new LinkedMultiValueMap<>();
        body.add("grant_type", "client_credentials");
        body.add("scope", "user.read");

        ResponseEntity<String> response = restTemplate.postForEntity(url("/oauth2/token"),
                new HttpEntity<>(body, headers), String.class);

        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.OK);
        assertThat(response.getBody()).contains("access_token", "Bearer");
    }

And the user API integration test verifies that secrets are not leaked:

    @Test
    void adminCanCreateAndFindUserWithoutLeakingSecrets() {
        TestRestTemplate admin = restTemplate.withBasicAuth("obaas-admin", BOOTSTRAP_PASSWORD);
        Map<String, String> request = Map.of(
                "username", "api-user",
                "password", "StrongPass123!",
                "roles", "ROLE_USER",
                "email", "[email protected]");

        ResponseEntity<String> createResponse = admin.postForEntity(url("/user/api/v1/createUser"),
                request, String.class);
        ResponseEntity<String> findResponse = admin.getForEntity(url("/user/api/v1/findUser?username=api-user"),
                String.class);

        assertThat(createResponse.getStatusCode()).isEqualTo(HttpStatus.CREATED);
        assertThat(createResponse.getBody()).contains("api-user").doesNotContain("StrongPass123!");
        assertThat(findResponse.getStatusCode()).isEqualTo(HttpStatus.OK);
        assertThat(findResponse.getBody())
                .contains("api-user")
                .doesNotContain("StrongPass123!")
                .doesNotContain("otp");
        assertThat(userRepository.findByUsername("api-user"))
                .hasValueSatisfying(user -> {
                    assertThat(user.getPassword()).isNotEqualTo("StrongPass123!").startsWith("$2");
                    assertThat(passwordEncoder.matches("StrongPass123!", user.getPassword())).isTrue();
                });
    }

That is exactly the sort of test I like for this kind of application. It verifies behavior from the outside, and then checks the database-backed repository to confirm the security property we care about: the cleartext password was not stored.

Wrap up

We now have a working Spring Boot 3 authorization server backed by Oracle Database.

Spring Security and Spring Authorization Server give us the authentication framework, filter chains, method-level authorization, password encoding, OAuth2/OIDC endpoints, JWK support, and token issuance. Oracle Database gives us a proper persistent user repository with schema ownership, constraints, audit fields, identity columns, and real integration testing through Testcontainers.

There are a few production topics that deserve their own treatment, especially persistent signing keys, production registered-client storage, wallet-based database connectivity, deployment configuration, and observability. But the core pattern is here: let Spring Security handle security, let Oracle Database handle the durable user store, and keep the boundary between them small and explicit.

In a future post, we will take this example forward to the Spring Boot 4.x line and look at the associated new versions of Spring Framework and Spring Security.

]]>
https://redstack.dev/2026/04/30/building-an-authorization-server-with-spring-boot-3-and-oracle-database/feed/ 0 4307
Building a Spring Boot Starter for Oracle Spatial https://redstack.dev/2026/04/02/building-a-spring-boot-starter-for-oracle-spatial/ https://redstack.dev/2026/04/02/building-a-spring-boot-starter-for-oracle-spatial/#respond <![CDATA[Mark Nelson]]> Thu, 02 Apr 2026 18:41:00 +0000 <![CDATA[Uncategorized]]> <![CDATA[art]]> <![CDATA[design]]> <![CDATA[GeoJSON]]> <![CDATA[jdbcclient]]> <![CDATA[oracle]]> <![CDATA[photography]]> <![CDATA[religion]]> <![CDATA[Spatial]]> <![CDATA[Spring]]> <![CDATA[springboot]]> <![CDATA[starter]]> https://redstack.dev/?p=4299 <![CDATA[A behind-the-scenes look at how we built a new Spring Boot starter for Oracle Spatial, the feedback that changed the design, and why the final API looks the way it does. Continue reading ]]> <![CDATA[

Hi everyone!

Over the last few days I have been working on a new Spring Boot starter for Oracle Spatial.  I did this work with two other real developers and three AI coding assistants, and I thought it would be interesting to write up the story of how it came together.

This was a good example of how building a starter is not just about getting code to compile or getting a sample app to run. It is also about API shape, developer expectations, reviewer feedback, naming, documentation, tests, and all of those little choices that decide whether something feels natural or awkward once another developer actually tries to use it.

We ended up in a much better place than where we started, but not because the first design was perfect. We got there because the reviews were good, the feedback was honest, and we were willing to change direction once it was clear the API could be better.

So this post is a bit of a behind-the-scenes look at that process.

What we were trying to build

The goal sounded simple enough on paper:

  • create a Spring Boot starter for Oracle Spatial
  • make it easy to work with SDO_GEOMETRY
  • keep the programming model GeoJSON-first
  • provide a sample application that shows realistic use

The kinds of queries we had in mind were the kinds of queries almost everyone reaches for first:

  • store a point or polygon
  • fetch it back as GeoJSON
  • find landmarks near a point
  • find landmarks within or interacting with a polygon

So from the beginning this was going to involve Oracle Spatial operators such as:

  • SDO_UTIL.FROM_GEOJSON
  • SDO_UTIL.TO_GEOJSON
  • SDO_FILTER
  • SDO_RELATE
  • SDO_WITHIN_DISTANCE
  • SDO_NN
  • SDO_GEOM.SDO_DISTANCE

We also wanted a sample app that felt approachable. We used a small San Francisco landmark dataset so the sample would be easy to understand and a little bit fun to play with.

Where we started

The first version of the starter leaned in the direction that I think a lot of us would naturally start with: helper utilities.

We had one piece focused on converting GeoJSON to and from Oracle Spatial, and another piece focused on generating the bits of SQL you need for common spatial predicates and projections.

On one level, that worked.

It absolutely solved real problems:

  • you did not have to remember the exact SDO_UTIL.FROM_GEOJSON(...) call
  • you did not have to hand-type the common predicate shapes every time
  • you could centralize the default SRID and distance unit

If you already like building SQL with JdbcClient, it was useful.

But there was a catch. The public API was still basically returning strings. And once that got called out in review, it was hard to ignore.

The review comment that changed everything

The most important reviewer comment was not about a syntax error or a missing test. It was about the shape of the API.

The feedback was basically: if this is a Spring Boot starter, why is the main experience a SQL string builder?

That was the right question.

It is one thing to have Spring-managed beans. It is another thing entirely to have a Spring-native programming model.

At that point the starter was Spring-managed, but not really Spring-native. Yes, you could inject the beans, but what you got back were still string fragments that you had to splice together yourself.

That triggered the main redesign.

The moment that made the redesign easier

One thing that helped a lot is that the original API had not been released yet.

That is a huge advantage.

It meant we did not need to protect old method names or preserve an API shape just because it already existed. We were free to ask a better question: if we were designing this from scratch for Spring developers, what should it look like?

Once we reframed it that way, the answer became much clearer.

What we changed

The big change was moving away from public string-builder beans and toward one main Spring JDBC integration bean:

  • OracleSpatialJdbcOperations

Instead of treating Oracle Spatial as a collection of string helper methods, we moved to a design where application code injects one bean and then creates typed spatial query parts:

  • SpatialGeometry
  • SpatialExpression
  • SpatialPredicate
  • SpatialRelationMask

This ended up being a much better fit for the way Spring JDBC code is usually written.

You still write SQL. We did not try to invent a whole DSL. But the spatial pieces now carry more meaning and stay connected to the bind process instead of floating around as anonymous fragments.

That was the key design improvement.

Why this felt better almost immediately

One of the things I liked about the redesign is that the sample application got better almost immediately once the API got better.

That is usually a good sign.

When an API is awkward, the sample tends to look awkward too. You can hide that for a little while, but not for long.

With the redesigned API, the service code started to read much more naturally. In the sample’s findNear flow, for example, we now create:

  • a SpatialGeometry
  • a distance expression
  • a within-distance predicate

and then build the SQL around those named pieces.

That may sound like a small thing, but it makes a big difference when someone is reading the sample for the first time and trying to understand the intended usage pattern.

Instead of “here is some mysterious Oracle SQL,” the code reads more like “here is the geometry we are searching around, here is the distance we want to project, and here is the predicate we want to apply.”

That is a much better teaching story.

The sample app became part of the design process

I think sometimes people talk about sample applications as though they are just an afterthought. In practice, for a starter like this, the sample is part of the design process.

It answers questions like:

  • does the API feel natural in a real service?
  • are the method names understandable?
  • can we explain this to somebody without too much ceremony?
  • does the code read like Spring, or does it read like a thin wrapper over raw SQL?

Our sample app is a simple REST service built around landmarks in San Francisco.

It has endpoints for:

  • creating a landmark
  • fetching a landmark by id
  • finding nearby landmarks
  • finding landmarks inside or interacting with a polygon

We also spent some time improving the seed data because I wanted the sample to be a little more recognizable and useful. So the sample now includes landmarks like:

  • Ferry Building
  • Union Square
  • Golden Gate Park
  • Oracle Park
  • Salesforce Tower
  • Transamerica Pyramid
  • Coit Tower

That may not be the most “architectural” part of the work, but it helps make the sample feel real instead of abstract.

One small enum that mattered more than I expected

Another design decision that turned out to matter a lot was replacing raw relation-mask strings with an enum:

  • SpatialRelationMask

This came directly out of review feedback.

The problem with raw strings is obvious once someone points it out: a developer can type something like "INTERSECTS" and everything looks fine until runtime, when Oracle tells them that is not a valid mask.

That is exactly the kind of thing a good starter should help with.

By introducing an enum, we made that part of the API:

  • safer
  • easier to discover
  • harder to misuse

It is a small API element, but it made the whole thing feel more deliberate.

Distance became first-class because it had to

One of the early reviews also pointed out that if we wanted to serve real spatial use cases, we needed first-class support for distance calculations.

That was absolutely right.

A lot of real applications want some version of:

  • find nearby things
  • return the distance
  • order by distance

If the starter handled filtering but not distance calculation, it would always feel like it stopped just short of the most useful scenario.

So SDO_GEOM.SDO_DISTANCE became part of the main API design rather than something developers would have to improvise themselves.

I think that made the starter much more credible for real use.

Documentation ended up being more important than I expected

I do not mean that in a generic “docs matter” way. I mean that for this starter in particular, the documentation had to explain a mental model, not just list methods.

The most useful documentation change we made was adding a clear distinction between:

  • what gets injected as a Spring bean
  • what gets created per query

That turned out to be the right way to explain the design.

You inject:

  • OracleSpatialJdbcOperations

And then per query you create:

  • SpatialGeometry
  • SpatialExpression
  • SpatialPredicate

Once we started explaining it that way, the docs got much easier to follow.

We also added concrete query pattern examples for:

  • inserts
  • GeoJSON projection
  • filter + relate
  • distance-ordered proximity queries

Those examples matter because they show how the design is actually supposed to be used. For a starter, that is every bit as important as the Javadoc.

Some of the polish came from the less glamorous review comments

Not all of the useful review feedback was high-level architecture. Some of it was the kind of practical feedback that makes software much more solid.

A few examples:

The sample needed better error handling

At one point the sample app would throw an exception and return a 500 if a caller passed an invalid spatial relation mask.

That is a perfectly believable bug in a sample. It is also exactly the kind of thing a reviewer should call out, because the sample is part of the product story.

We fixed it so that invalid masks now produce a proper 400 Bad Request with a useful message.

The tests needed to be more Spring-native too

We got feedback to use @ServiceConnection in the Testcontainers-based tests, and that was good advice. It made the tests more consistent with current Spring Boot style and reduced some manual wiring.

We also adjusted the integration test to run as the app user rather than system, which is a much better representation of how the code should actually work.

The SQL setup needed to be repeatable in CI

This is one of those things that only becomes obvious once CI starts yelling at you.

We hit issues with setup SQL being run more than once and colliding with existing objects or metadata. That led to a round of cleanup to make the test setup more idempotent and more robust.

That is not the glamorous part of building a starter, but it is absolutely part of shipping one.

What we deliberately did not do

There were also some things we chose not to do in this round of work.

I think that is worth talking about because saying “not yet” is often part of good design.

We did not try to eliminate SQL entirely

Even after the redesign, application code still assembles SQL statements with JdbcClient.

That was intentional.

We wanted to make the API more Spring-native, but not disappear into a custom DSL or pretend SQL no longer exists. This is still Spring JDBC. SQL is still the right abstraction level. The important improvement was to stop making the public API itself a string-builder API.

There is still room to improve the ergonomics in the future, especially around making it harder to forget a bind contributor. Reviewers called that out too, and I think they are right. But that felt like a v2 refinement, not something we needed to solve before this version was useful.

We did not add every possible spatial operation

Another good review point was that buffer generation would be useful too. I agree.

But once again, that felt like feature expansion rather than a cleanup item for this iteration.

There is always a temptation to keep adding one more thing once the codebase is open and fresh in your mind. In this case I think the right move was to get the core API shape right, get the sample and docs into good condition, and leave some room for future work.

What I think we ended up with

At the end of all of this, what we have is not just a set of helper methods. It is a small but coherent Spring Boot story for Oracle Spatial.

The final result includes:

  • a starter that auto-configures a Spring JDBC-oriented spatial bean
  • a typed API for spatial query parts
  • safer handling of SDO_RELATE masks
  • first-class distance support
  • a sample REST app that demonstrates realistic usage
  • docs that explain the mental model, not just the method list
  • integration tests that run against Oracle AI Database 26ai Free with Testcontainers

And maybe the biggest thing for me is that it now feels like something I would want another Spring developer to pick up and try.

That was not as true of the first version.

What we are already thinking about for v2

Even though I feel good about where this landed, we have also been pretty careful to write down the things that did not belong in this iteration.

There are at least two clear areas for a possible v2.

The first is improving the ergonomics around binding and query composition.

Right now the design is much better than the original string-builder approach, but there is still a pattern like this:

spatial.bind(
        jdbcClient.sql("select ... " + distance.selection("distance")
                + " from landmarks where " + within.clause()),
        distance, within)

That is a reasonable place to be for a JDBC-oriented starter, but it is not hard to imagine a future version that tightens that up further and makes it harder to forget a bind contributor.
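
To make the bind-contributor risk concrete, here is an illustrative variation on the snippet above (not real project code):

spatial.bind(
        jdbcClient.sql("select ... " + distance.selection("distance")
                + " from landmarks where " + within.clause()),
        distance)   // oops: "within" appears in the SQL but was never passed as a contributor

Nothing stops that from compiling; the missing bind values only surface when the query runs.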

The second is expanding the supported spatial operations.

One of the most obvious candidates there is buffer generation with SDO_GEOM.SDO_BUFFER. That came up during review, and I think it is a very good candidate for a future enhancement. There are also broader questions about whether repository-style integrations or richer mapping options might make sense once we have real experience with how people use the current API.

But I want to be careful here. Just because we can imagine a v2 does not mean we should rush into it. What I would really like at this point is to get feedback from actual users first.

I would much rather learn from people who try the starter on real spatial workloads than guess too aggressively about what the next abstraction should be. Maybe the next thing we need is buffer support. Maybe it is better row mapping. Maybe it is a tighter JdbcClient integration. Maybe it is something we have not even thought about yet.

So yes, we do have a v2 plan taking shape. But before we take that next step, I would love to hear from users and see how this first version holds up in practice.

Working with AI coding assistants

Like many people today, I am doing more and more work with AI coding assistants.  For this piece of work, these are the participants and the roles they played:

  • me: collaborated with GPT to write the specification, set the standards, conventions, etc., guided the whole process, read and reviewed code myself, helped make, or made design decisions
  • GPT-5.4: acted as the primary developer, wrote most of the code, reflected on its own work, helped write and refine the specification, processed review comments and planned, helped with the design
  • Claude Code: acted as a “senior architect” who reviewed the code with a focus on technical aspects and gave detailed feedback and recommendations
  • Gemini (3 thinking/pro): acted as the “product manager” who reviewed the project from the point of view of how well it addressed the need, if it exposed the right features and capabilities, and how useful it would be for a spatial user
  • two human colleagues: acted as reviewers and developers who provided valuable feedback on the code and the design

As we worked through the project together, I saved a lot of the generated plans and reviews in the .ai directory because I think those are valuable artifacts.  We’re obviously using AI more, and we have no intention of hiding it, so why not save these for future use?  Seems like the right thing to do.  We did have a conversation about this before coming to the conclusion that we should save them and put them in the public repo.

I also want to mention that there were several cycles in the project.  We started discussing the requirements and writing a plan.  We reviewed this and iterated on it two or three times before we ever started writing any code.  By “we”, I mean me and the three AIs.  It was interesting to see the slightly different opinions of the AIs and the different areas they focused their attention on.  Just like human developers, the different opinions and insights were useful to help us get to a better result.

After we started development, we continued to cycle through reviews, feedback, updates.  At one point, as discussed above, we decided to redesign – this was a result of human feedback.  The AIs certainly did a good job, and they are getting better all the time, but I do still feel that they are not as good at doing novel things as they are at doing things that have been done before, and they perhaps don’t consider (or at least mention) the implications of architectural decisions as much as some human developers do, at least in my experience to date.  But I am sure they will continue to evolve rapidly.

I guess I’d just say – if you have not tried working in a team with multiple AI coding assistants yet, give it a go!

Why I wanted to write this up

I wanted to tell this story because I think it is a pretty normal and healthy example of how good API work actually happens.

You start with an idea.

You build something that is useful but not yet quite right.

Somebody asks a question that exposes the weakness in the design.

You resist it for a minute.

Then you realize they are right.

Then the real design work starts.

That is basically what happened here.

And I think the result is better precisely because we were willing to let the reviews change the direction of the code rather than just polish the original design.

Wrapping up

I am really happy with how this turned out.

Not because it is finished forever. It is not. There are still good future directions here. But the core design now feels solid, and the sample, tests, and docs all tell a coherent story.

That is what I wanted from this starter.

If you are working with Spring Boot and Oracle Database, and you have spatial use cases in mind, I think this gives you a pretty nice starting point. And if you are designing your own starter or library, maybe the bigger lesson here is that reviewer feedback is not just something to “address.” Sometimes it is the thing that helps you find the real design.

More on this soon.

Building a Simple Spatial Web App with the New Spring Boot Starter for the Spatial Features in Oracle AI Database 26ai https://redstack.dev/2026/04/01/building-a-simple-spatial-web-app-with-the-new-spring-boot-starter-for-the-spatial-features-in-oracle-ai-database-26ai/ https://redstack.dev/2026/04/01/building-a-simple-spatial-web-app-with-the-new-spring-boot-starter-for-the-spatial-features-in-oracle-ai-database-26ai/#respond <![CDATA[Mark Nelson]]> Wed, 01 Apr 2026 21:38:09 +0000 <![CDATA[Uncategorized]]> <![CDATA[26ai]]> <![CDATA[ai]]> <![CDATA[GeoJSON]]> <![CDATA[oracle]]> <![CDATA[oracle23ai]]> <![CDATA[Spatial]]> <![CDATA[Spring]]> <![CDATA[sql]]> <![CDATA[technology]]> https://redstack.dev/?p=4262 <![CDATA[Hi everyone! In this post, I will show how to run the new Spring Boot starter for the Spatial features in Oracle AI Database 26ai by using the sample application and connecting it to a simple web application front end. … Continue reading ]]> <![CDATA[

Hi everyone!

In this post, I will show how to run the new Spring Boot starter for the Spatial features in Oracle AI Database 26ai by using the sample application and connecting it to a simple web application front end.

The starter makes it easier to build Spring Boot applications that work with Oracle Spatial data and expose that data through familiar REST APIs. For this example, I am using Oracle AI Database 26ai Free, the Spatial sample application from the Spring Cloud Oracle repository, and a small React front end that displays landmark data on a map.

The app will do three things:

  • load landmarks inside an initial San Francisco polygon
  • find the nearest landmarks to a point you click on the map
  • search the landmarks inside the current visible map area

The Spring Boot starter for the Spatial features in Oracle AI Database 26ai is a v1 release, and I want to say that clearly up front. If you build spatial applications with Spring Boot and Oracle Database, I would really love feedback on what works well, what feels awkward, and what you would want to see next.

A few important things to note before we get into the code:

  • the work is merged into main
  • the new starter artifacts may still be ahead of the next Maven Central release
  • for this walkthrough, I am running the sample application from a source checkout of the spring-cloud-oracle repository

That last point matters because I want to give you steps you can run today, not steps that depend on a release that may not have shipped yet.

Before You Begin

For this walkthrough, you will need:

  • Java 21 or newer
  • Maven
  • Node.js and npm
  • Docker
  • a local checkout of the spring-cloud-oracle repository
  • a local checkout of this frontend companion repository

You will also need Oracle AI Database 26ai Free running locally.

If you already have Oracle AI Database 26ai Free running and you already have a user/schema you want to use for testing, you can adapt the setup below. I am going to show the exact steps that worked for me from a clean local environment.

Start Oracle AI Database 26ai Free

Before you run the container, sign in to Oracle Container Registry and accept the Oracle terms for the Oracle AI Database 26ai Free image. If you skip that step, the image pull may fail with an authentication or authorization error.

Then start Oracle AI Database 26ai Free in a container:

 docker run -d \
   --name oracle-free \
   -p 1521:1521 \
   -e ORACLE_PWD=Welcome1 \
   container-registry.oracle.com/database/free:latest

The first startup will take a few minutes. You can check the logs to see when startup is complete:

docker logs -f oracle-free

Wait until you see the message DATABASE IS READY TO USE before continuing.

Create the Sample User

For local testing, the sample worked most reliably when I created a dedicated user for it and granted the permissions needed for the spatial objects and index.

Connect as the system user and run:

 create user spatialsample identified by spatialsample;
 grant create session to spatialsample;
 grant create table to spatialsample;
 grant create view to spatialsample;
 grant create sequence to spatialsample;
 grant create procedure to spatialsample;
 grant unlimited tablespace to spatialsample;
 grant create indextype to spatialsample;
 grant create operator to spatialsample;
 grant execute on mdsys.sdo_geometry to spatialsample;
 grant execute on mdsys.sdo_util to spatialsample;
 grant execute on mdsys.sdo_geom to spatialsample;
 grant execute on mdsys.sdo_cs to spatialsample;
 grant execute on mdsys.spatial_index_v2 to spatialsample;

I am calling this out explicitly because this is one of the places where a setup detail can quietly derail the rest of the sample. Creating the user and grants up front made the rest of the walkthrough much more predictable.

The explicit grant execute on mdsys.spatial_index_v2 may look a little unusual at first glance, but it is there for a reason. Oracle’s Spatial index documentation calls out the need for EXECUTE privilege on the index type and its implementation type when creating a spatial index.

Create the Schema and Seed Data

Next, connect as the new spatialsample user and set up the schema and initial sample data:

create table if not exists landmarks (
    id        number primary key,
    name      varchar2(200) not null,
    category  varchar2(100) not null,
    geometry  mdsys.sdo_geometry not null
);

delete from user_sdo_geom_metadata
where table_name = 'LANDMARKS'
  and column_name = 'GEOMETRY';

insert into user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
values (
    'LANDMARKS',
    'GEOMETRY',
    mdsys.sdo_dim_array(
        mdsys.sdo_dim_element('LONG', -180, 180, 0.005),
        mdsys.sdo_dim_element('LAT', -90, 90, 0.005)
    ),
    4326
);

create index if not exists landmarks_spatial_idx
    on landmarks (geometry)
    indextype is mdsys.spatial_index_v2;

delete from landmarks;

insert into landmarks (id, name, category, geometry)
values
    (1, 'Ferry Building', 'MARKET', sdo_util.from_geojson('{"type":"Point","coordinates":[-122.3933,37.7955]}', null, 4326)),
    (2, 'Union Square', 'PLAZA', sdo_util.from_geojson('{"type":"Point","coordinates":[-122.4074,37.7879]}', null, 4326)),
    (3, 'Golden Gate Park', 'PARK', sdo_util.from_geojson('{"type":"Polygon","coordinates":[[[-122.511,37.771],[-122.454,37.771],[-122.454,37.768],[-122.511,37.768],[-122.511,37.771]]]}', null, 4326)),
    (4, 'Oracle Park', 'STADIUM', sdo_util.from_geojson('{"type":"Point","coordinates":[-122.3893,37.7786]}', null, 4326)),
    (5, 'Salesforce Tower', 'SKYSCRAPER', sdo_util.from_geojson('{"type":"Point","coordinates":[-122.3969,37.7897]}', null, 4326));

commit;

At this point, I have:

  • a landmarks table
  • spatial metadata in USER_SDO_GEOM_METADATA
  • a spatial index
  • seeded landmarks to query

This is one of those places where I think being explicit helps. It is very easy to gloss over schema setup in a sample, but for Oracle Spatial this part is important:

  • the table exists
  • the geometry metadata exists
  • the spatial index exists
  • the sample data is already in place

Without those pieces, the rest of the walkthrough is much harder to follow.

I also like this version of the setup script because it is easier to rerun while you are testing locally. It clears the metadata entry and seed data before recreating the parts of the sample that need to be there.

Run the Official Sample Application

The backend for this walkthrough is the official sample application from the Spring Cloud Oracle repository.

The command that worked reliably for me was:

cd ~/spring-cloud-oracle/database/starters/oracle-spring-boot-starter-samples/oracle-spring-boot-sample-spatial
mvn spring-boot:run \
-Dspring-boot.run.arguments="--spring.datasource.url=jdbc:oracle:thin:@//localhost:1521/FREEPDB1
--spring.datasource.username=spatialsample --spring.datasource.password=spatialsample"

Note that I split that string for readability, but you need to put it all on one line!

There are two details here that are worth calling out:

  • I ran the command from the sample module directory
  • passing the datasource properties through spring-boot.run.arguments was the most reliable local approach

That is worth preserving exactly. I tried a couple of other ways to pass the datasource values, and this was the one that consistently worked for me.

If that starts cleanly, the sample backend should now be listening on http://localhost:9002.

What the Sample Already Gives Us

Before I write any frontend code, it is worth looking at what the sample already gives us.

The REST API is small and very easy to explain:

  • POST /landmarks
  • GET /landmarks/{id}
  • GET /landmarks/near
  • POST /landmarks/within

That is a useful shape for a tutorial because it is already enough to demonstrate:

  • GeoJSON in and GeoJSON out
  • nearest-neighbor style point searches
  • polygon-based area searches

Add Local CORS for the Frontend

When I first tried to connect the frontend to the sample application, I hit a CORS problem. For local development, the fix was to add @CrossOrigin to LandmarkController.java in the sample application.

Add this import:

 import org.springframework.web.bind.annotation.CrossOrigin;

Then annotate the controller:

@CrossOrigin(origins = "http://localhost:5173")
@RestController
public class LandmarkController {
    // ...
}
 

This is just for local development. I would not copy this unchanged into a production application.
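
If you eventually want something closer to production shape, one option is to move the allowed origins into configuration instead of hard-coding them in the annotation. Here is a minimal sketch using Spring’s WebMvcConfigurer; the app.cors.allowed-origins property name is my own invention for illustration, not something the sample defines:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

// A sketch only, not part of the sample: externalize the allowed origins so
// each environment can supply its own value.
@Configuration
public class CorsConfig implements WebMvcConfigurer {

    // Hypothetical property; defaults to the local Vite dev server.
    @Value("${app.cors.allowed-origins:http://localhost:5173}")
    private String[] allowedOrigins;

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        // Only the landmark endpoints need to be visible to the browser app.
        registry.addMapping("/landmarks/**")
                .allowedOrigins(allowedOrigins)
                .allowedMethods("GET", "POST");
    }
}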

Also, if you skip this and go straight to the frontend, the browser error is not especially helpful. You will usually just see a generic “failed to fetch” style error when the request to /landmarks/within gets blocked.

Smoke-Test the REST API

Before building the frontend, I like to make sure the API is behaving the way I expect.

Let’s start with a few direct calls to the sample backend.

A simple lookup by id:

curl http://localhost:9002/landmarks/1

A nearest-neighbor style request using a compact GeoJSON point in the query string:


 curl --get "http://localhost:9002/landmarks/near" \
 --data-urlencode 'geometry={"type":"Point","coordinates":[-122.3933,37.7955]}' \
 --data-urlencode 'distance=2000' \
 --data-urlencode 'limit=3'

A polygon search:


 curl -X POST http://localhost:9002/landmarks/within \
 -H "Content-Type: application/json" \
 -d '{
 "geometry":"{\"type\":\"Polygon\",\"coordinates\":[[[-122.515,37.75],[-122.35,37.75],[-122.35,37.808],[-122.515,37.808],[-122.515,37.75]]]}",
 "mask":"ANYINTERACT"
 }'
 

At this point, it is useful to verify that the backend is returning the kind of data we expect before moving on to the browser. The calls above show the API working directly from the command line.

If these calls work, the rest of the article gets a lot simpler. At that point, we know the database is up, the sample app is talking to it, and the spatial endpoints are behaving before the browser gets involved.

The Frontend Project

For the UI, I used a separate React application built with Vite, TypeScript, and React Leaflet.

Here is the full layout of the web directory before I walk through the main pieces:


 web/
   src/
     App.tsx
     components/
       MapView.tsx
     lib/
       api.ts
       geo.ts
 

I kept this frontend separate from the sample app because it makes the browser-side behavior easier to explain, and it is a very common setup that most people will recognize immediately.

I also like this split for demo applications because it keeps the browser code and the backend code easy to reason about independently.

Start the Frontend

In this frontend project, start the development server like this:


 cd ~/spatial-starter-v1-blog/web
 VITE_API_BASE_URL=http://localhost:9002 npm run dev
 

By default, Vite runs on http://localhost:5173, which is why that is the origin I allowed in the local CORS annotation above.

The Initial Map Load

For the initial screen, I did not want a generic “list all landmarks” endpoint. Since this is a spatial application, I wanted the initial load to already be spatially scoped.

So instead, I defined an opening polygon for a useful part of San Francisco:

  • north: Fisherman’s Wharf
  • east: Oracle Park
  • south: Mission
  • west: the western edge of Golden Gate Park

In the frontend, that polygon is just a small GeoJSON object:

const INITIAL_WEST = -122.515;
const INITIAL_EAST = -122.35;
const INITIAL_SOUTH = 37.75;
const INITIAL_NORTH = 37.808;

export const initialSearchPolygon = {
  type: "Feature",
  properties: {
    label: "Initial San Francisco search area"
  },
  geometry: {
    type: "Polygon",
    coordinates: [[
      [INITIAL_WEST, INITIAL_SOUTH],
      [INITIAL_EAST, INITIAL_SOUTH],
      [INITIAL_EAST, INITIAL_NORTH],
      [INITIAL_WEST, INITIAL_NORTH],
      [INITIAL_WEST, INITIAL_SOUTH]
    ]]
  }
};

Then I post that polygon to /landmarks/within with the ANYINTERACT mask.

That gives the initial screen a much more natural map behavior. Instead of “show me everything,” the app starts with “show me the landmarks in this area.”

One small helper worth showing here is geometryToString. The sample accepts the geometry field as a JSON string, not as a nested object, so the frontend needs to serialize the GeoJSON before sending it.

export function geometryToString(feature: Feature): string {
  return JSON.stringify(feature.geometry);
}

With that helper in place, the API call stays pretty small:

export async function fetchLandmarksWithin(feature: Feature<Geometry>, mask: string): Promise<Landmark[]> {
  const response = await fetch(`${API_BASE_URL}/landmarks/within`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      geometry: geometryToString(feature),
      mask
    })
  });
  if (!response.ok) {
    throw new Error(await response.text());
  }
  return response.json() as Promise<Landmark[]>;
}

And the initial load in the app looks like this:

useEffect(() => {
  void loadInitialArea();
}, []);

async function loadInitialArea() {
  setLoading(true);
  setError(null);
  setActiveFeature(initialSearchPolygon as Feature<Geometry>);
  setNearestSearchRadius(null);
  try {
    const landmarks = await fetchLandmarksWithin(initialSearchPolygon as Feature<Geometry>, "ANYINTERACT");
    setSearchResults(makeResultSet("initial-area", landmarks));
    mapRef.current?.fitBounds([
      [37.75, -122.515],
      [37.808, -122.35]
    ]);
  } finally {
    setLoading(false);
  }
}

Once the initial polygon query is wired up, the application starts with a focused view of San Francisco and only the landmarks that fall inside that opening search area.

Rendering Points and Polygons

One of the nice things about the sample is that the API boundary stays GeoJSON-first. That means the frontend does not need a special Oracle representation. It can just parse the returned GeoJSON and render it.

I render polygon landmarks with GeoJSON, and point landmarks with CircleMarker.

{areaFeatures.map((landmark) => (
  <GeoJSON
    key={landmark.id}
    data={parseLandmarkGeometry(landmark.geometry) as Polygon}
    style={() => ({
      color: "#0f766e",
      weight: 2,
      fillColor: "#14b8a6",
      fillOpacity: 0.22
    })}
  />
))}
{pointFeatures.map((landmark) => {
  const geometry = parseLandmarkGeometry(landmark.geometry) as Point;
  const [longitude, latitude] = geometry.coordinates;
  return (
    <CircleMarker
      key={landmark.id}
      center={[latitude, longitude]}
      radius={9}
      pathOptions={{
        color: "#ffffff",
        weight: 2,
        fillColor: "#ea580c",
        fillOpacity: 0.95
      }}
    />
  );
})}

That is the core value proposition here. The backend gets to use Oracle Spatial, and the browser gets to stay in familiar GeoJSON territory.

For a web application, that is exactly the split I want.

Nearest Landmarks from a Clicked Point

The next feature is the one I wanted most for the demo: click on the map, send a point to the backend, and get the nearest landmarks back.

On click, I convert the map location into a GeoJSON point:

export function pointFeatureFromCoordinates(longitude: number, latitude: number): Feature<Geometry> {
  return {
    type: "Feature",
    properties: {
      label: "Selected point"
    },
    geometry: {
      type: "Point",
      coordinates: [longitude, latitude]
    }
  };
}

Then I send that point to /landmarks/near with distance and limit values:

export async function fetchNearbyLandmarks(point: Feature<Geometry>, distance: number, limit: number): Promise<Landmark[]> {
  const params = new URLSearchParams({
    geometry: geometryToString(point),
    distance: String(distance),
    limit: String(limit)
  });
  const response = await fetch(`${API_BASE_URL}/landmarks/near?${params.toString()}`);
  if (!response.ok) {
    throw new Error(await response.text());
  }
  return response.json() as Promise<Landmark[]>;
}

One small UX detail I liked here was keeping the clicked point visible and drawing a circle for the search radius.

{nearestSearchRadius ? (
  <Circle
    center={[activePoint.geometry.coordinates[1], activePoint.geometry.coordinates[0]]}
    radius={nearestSearchRadius}
    pathOptions={{
      color: "#f97316",
      weight: 2,
      fillColor: "#fdba74",
      fillOpacity: 0.12,
      dashArray: "10 6"
    }}
  />
) : null}
<CircleMarker
  center={[activePoint.geometry.coordinates[1], activePoint.geometry.coordinates[0]]}
  radius={8}
  pathOptions={{
    color: "#ffffff",
    weight: 3,
    fillColor: "#c2410c",
    fillOpacity: 1
  }}
/>

That makes the search feel much more obvious when you use the map.

It is a small detail, but it makes the demo much easier to understand visually. You can immediately see both the point you picked and the radius you asked the backend to search within.

The screenshot below shows that interaction in context, with the selected point on the map and the search radius drawn around it.

Search the Visible Area

The other interaction I wanted was the ability to search the current map area.

For that, I turn the map bounds into a polygon and post it to /landmarks/within:

export function boundsToPolygon(bounds: LatLngBounds): Feature<Polygon> {
  const west = bounds.getWest();
  const east = bounds.getEast();
  const south = bounds.getSouth();
  const north = bounds.getNorth();
  return {
    type: "Feature",
    properties: {
      label: "Visible map area"
    },
    geometry: {
      type: "Polygon",
      coordinates: [[
        [west, south],
        [east, south],
        [east, north],
        [west, north],
        [west, south]
      ]]
    }
  };
}

Then the app sends that polygon with the currently selected mask:

async function handleVisibleAreaSearch() {
  if (!visibleBounds) {
    return;
  }
  const polygon = boundsToPolygon(visibleBounds) as Feature<Geometry>;
  setActiveFeature(polygon);
  setNearestSearchRadius(null);
  const landmarks = await fetchLandmarksWithin(polygon, mask);
  setSearchResults(makeResultSet("area-search", landmarks));
}

I kept the mask choices simple for this version:

  • ANYINTERACT
  • INSIDE

ANYINTERACT returns any landmark that touches or overlaps the search polygon, which is useful when a geometry sits on the edge of the visible area. INSIDE returns only landmarks whose geometry falls entirely within the polygon.

After panning the map and running the visible-area query, the UI updates to reflect the current viewport instead of the original startup polygon.

Why the GeoJSON Boundary Matters

There are a few reasons I like this sample as a starting point.

First, the sample stays GeoJSON-first at the API boundary. That is exactly what I want for browser-facing map work.

Second, the backend makes the spatial queries feel very normal from a Spring developer perspective. You are still building a simple REST application, not learning a whole new framework just to use Oracle Spatial.

Third, the v1 scope feels very reasonable. It gives me enough to build real point and polygon workflows, and it gives me something concrete to build on without trying to solve every spatial use case at once.

That said, this is also why I would really like feedback from people who actively use Oracle Spatial and Spring Boot. The best next steps will probably come from the people doing this work for real.

Wrapping Up

For this example, I used the new Oracle Spatial Spring Boot sample as the backend, then layered a small React + Leaflet application on top of it to:

  • load a meaningful initial area
  • search for nearby landmarks from a clicked point
  • query landmarks inside the current visible map area

The part I like most is that nothing about the frontend had to become Oracle-specific. The browser stayed in GeoJSON and map primitives, which is exactly where I want it.

For a Spring developer, that is one of the most useful parts of this v1 starter. It makes it much easier to work with spatial data in a normal Spring Boot application without having to invent the basic plumbing first.

If you are using Oracle Spatial with Spring Boot, I would genuinely love feedback on what works well, what feels awkward, and what you would want to see next.

Links

Exploring some new Helidon features – Data Repositories and SE Declarative https://redstack.dev/2026/03/05/exploring-some-new-helidon-features-data-repositories-and-se-declarative/ https://redstack.dev/2026/03/05/exploring-some-new-helidon-features-data-repositories-and-se-declarative/#respond <![CDATA[Mark Nelson]]> Fri, 06 Mar 2026 01:58:29 +0000 <![CDATA[Uncategorized]]> <![CDATA[26ai]]> <![CDATA[ai]]> <![CDATA[coding]]> <![CDATA[Data]]> <![CDATA[Declarative]]> <![CDATA[duality]]> <![CDATA[Helidon]]> <![CDATA[Injection]]> <![CDATA[IoC]]> <![CDATA[Java]]> <![CDATA[JSON]]> <![CDATA[oracle]]> <![CDATA[repository]]> <![CDATA[SE]]> <![CDATA[technology]]> https://redstack.dev/?p=4251 <![CDATA[With Helidon 4.4.0 right around the corner, I’ve been spending some time playing with the latest milestone release (4.4.0-M2). If you haven’t been following along, Helidon 4 was a major milestone because it was the first framework built from the … Continue reading ]]> <![CDATA[

With Helidon 4.4.0 right around the corner, I’ve been spending some time playing with the latest milestone release (4.4.0-M2). If you haven’t been following along, Helidon 4 was a major milestone because it was the first framework built from the ground up on Java 21 virtual threads.

Now, with 4.4, we are seeing some really cool incubating features becoming more stable, including Helidon Data Repositories, Helidon SE Declarative, and Helidon AI. I am working on a sample project called Helidon-Eats to show how these work together. In this installment, I am looking at the first two.

The “No-Magic” Power of Helidon SE Declarative

If you’ve used Helidon MP (MicroProfile), or if you are more familiar with Spring Boot’s Inversion of Control approach (like me), then you’re used to the convenience of dependency injection and annotations. Helidon SE, on the other hand, has always focused on transparency and avoiding “magic.” While that’s great for performance, it usually meant writing more boilerplate code to register routes and manage services manually.

Helidon SE Declarative changes that. It gives you an annotation-driven model like MP, but here is the trick: it does everything at build-time. Using Java annotation processors, Helidon generates service descriptors during compilation. This means you get the clean, injectable code you want, but without the runtime reflection overhead that slows down startup and eats memory. Benchmarks have even shown performance gains of up to +295% over traditional reflection-based models on modern JDKs. (see this article)

Now, to be completely fair, I will say that I am not completely sold on the build-time piece yet. For example, the @GenerateBinding annotation (if I am not mistaken) causes an ApplicationBinding class to be generated at build time, and that lives in your target/classes directory. I found during refactoring that you have to be careful to mvn clean each time to make sure it stays in sync with your code; just doing a mvn compile or package could get you into trouble. And I am not sure I am happy with it not being checked into the source code repository. But I’ll withhold judgment until I have worked with it a bit more!

Simplifying Persistence with Helidon Data

The Helidon Data Repository is another big addition. It’s a high-level abstraction that acts as a compile-time alternative to heavy runtime ORM frameworks. Instead of writing JDBC code, you define a Java interface, and Helidon’s annotation processor generates the implementation for you.

It supports standard patterns like CrudRepository and PageableRepository, which I used in this project to handle the recipe collection. The framework can even derive queries directly from your method names (like Spring Data does) – so a method like findById is automatically turned into the correct SQL at build-time.
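
To make that concrete, here is a small sketch of a derived query. The findByCategory method is my own example rather than something from the Helidon-Eats repo, but it follows the same conventions as the repository shown later in this post, assuming a Recipe entity with a category field:

import java.util.List;

import io.helidon.data.Data;

// A sketch only: the annotation processor derives the query from the method
// name at build time, so no implementation is written by hand.
@Data.Repository
public interface RecipeByCategoryRepository extends Data.CrudRepository<Recipe, Integer> {

    List<Recipe> findByCategory(String category);
}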

The Backend: Oracle AI Database 26ai

For this sample, I’m using Oracle AI Database 26ai Free. I sourced some public domain recipe data from Kaggle that comes as a line-by-line JSON file (LDJSON).

Normally, if you want to store hierarchical JSON in relational tables, you have to write complex mapping logic in your application. But I wanted to try a more novel approach using JSON Relational Duality Views (DV).

Duality Views are a game-changer because they decouple how data is stored from how it is accessed. My data is stored in three normalized tables (RECIPE, INGREDIENT, and DIRECTION) which ensures ACID consistency and no data duplication. The database can surface this data to applications as a single, hierarchical JSON document. I am not using that feature in this post, but I will in the future!

GraphQL-Based View Creation and Loading

One of the coolest parts of Oracle AI Database 26ai is that you can define these views using a GraphQL-based syntax. The database engine automatically figures out the joins based on the foreign key relationships.

Here is how I defined the recipe_dv:

CREATE OR REPLACE JSON RELATIONAL DUALITY VIEW recipe_dv AS
recipe @insert @update @delete
{
  recipeId: id,
  recipeTitle: title,
  description: description,
  category: category,
  subcategory: subcategory,
  ingredients: ingredient @insert @update @delete
  [
    {
      id: id,
      item: item
    }
  ],
  directions: direction @insert @update @delete
  [
    {
      id: id,
      step: step
    }
  ]
};

Isn’t that just the cleanest piece of SQL that deals with JSON that you’ve ever seen?

Because the view is “updatable” (@insert, @update), I used it to actually load the data. Instead of a complex ETL process, my startup script just reads the LDJSON file line-by-line and does a simple SQL insert directly into the view. The database engine takes that single JSON object and automatically decomposes it into rows for the three underlying tables.
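
Here is a rough sketch of what that load can look like with plain JDBC. The recipes.ldjson file name and the connection parameter are assumptions for illustration; the point is simply that each line of the file is one JSON document inserted straight into the view:

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// A sketch only, assuming "connection" is an open JDBC connection to the
// schema that owns RECIPE_DV. Each LDJSON line is a complete JSON document.
void loadRecipes(Connection connection) throws IOException, SQLException {
    try (BufferedReader reader = Files.newBufferedReader(Path.of("recipes.ldjson"));
            PreparedStatement stmt = connection.prepareStatement(
                    "insert into recipe_dv values (?)")) {
        String line;
        while ((line = reader.readLine()) != null) {
            stmt.setString(1, line);
            stmt.executeUpdate(); // the database decomposes the document into rows
        }
    }
}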

Modeling the Service

On the Java side, I modeled the Recipe entity to handle the parent-child relationships using standard @OneToMany collections.

One detail I want to highlight is the use of @JsonbTransient. When you build a REST API, you often have internal metadata like database primary keys or sort ordinals that you don’t want messing up the JSON that the end user gets to see. By annotating those fields with @JsonbTransient, they are excluded from the final JSON response. This keeps the API response clean and focused only on the recipe data.

@Entity
@Table(name = "RECIPE")
public class Recipe {
    @Id
    @Column(name = "ID")
    private Long recipeId;
    
    private String recipeTitle;

    @JsonbTransient
    private Long internalId; // Hidden from the API
    
    @OneToMany(mappedBy = "recipe")
    private List<Ingredient> ingredients;
    
    //...
}

In the repository interface, you can use the method naming conventions to create queries automatically (like Spring Data), and you can also write your own JPQL (not SQL) queries, as I did in this case (also like Spring Data):

package com.github.markxnelson.helidoneats.recipes.model;

import java.util.Optional;

import io.helidon.data.Data;

@Data.Repository
public interface RecipeRepository extends Data.CrudRepository<Recipe, Integer> {

    @Data.Query("SELECT DISTINCT r FROM Recipe r "
        + "LEFT JOIN FETCH r.ingredients "
        + "LEFT JOIN FETCH r.directions "
        + "WHERE r.recipeId = :recipeId")
    Optional<Recipe> findByRecipeIdWithDetails(Integer recipeId);

}

Wiring and Startup

The configuration is handled in the application.yaml, where I point Helidon to the Oracle instance using syntax that again is very reminiscent of what I’d do in Spring Boot.

server:
  port: 8080
  host: 0.0.0.0
app:
    greeting: "Hello"

data:
  sources:
    sql:
      - name: "food"
        provider.hikari:
          username: "food"
          password: "Welcome12345##"
          url: "jdbc:oracle:thin:@//localhost:1521/freepdb1"
          jdbc-driver-class-name: "oracle.jdbc.OracleDriver"
  persistence-units:
    jakarta:
      - name: "recipe"
        data-source: "food"
        properties:
          hibernate.dialect: "org.hibernate.dialect.OracleDialect"
          jakarta.persistence.schema-generation.database.action: "none"

With Declarative SE, injecting the repository into my endpoint is simple. I just use @Service.Inject on the constructor, which allows me to keep my fields private final.

@Service.Singleton
@Http.Path("/recipe")
public class RecipeEndpoint {
    private final RecipeRepository repository;

    @Service.Inject
    public RecipeEndpoint(RecipeRepository repository) {
        this.repository = repository;
    }

    @Http.GET
    @Http.Path("/{id}")
    public Optional<Recipe> getRecipe(Long id) {
        return repository.findById(id);
    }
}

Finally, the Main class uses @Service.GenerateBinding. This tells the annotation processor to generate the “wiring” code that starts the server and initializes the service registry without needing to scan the classpath at runtime.

@Service.GenerateBinding
public class Main {
    public static void main(String[] args) {
        LogConfig.configureRuntime();
        ServiceRegistryManager.start(ApplicationBinding.create());
    }
}

In this context, the “service registry” is something in Helidon that keeps track of the services in the application and handles injection and so on. It’s a lot like the way Spring Boot scans for beans and wires/injects them where needed.

The Result

When you hit the service, you get a clean, well-structured JSON response that masks all the complexity of the underlying three-table relational join.

Example Response for http://localhost:8080/recipe/22387:

{
  "category": "Appetizers And Snacks",
  "description": "I came up with this rhubarb salsa while trying to figure out what to do with an over-abundance of rhubarb...",
  "directions":,
  "ingredients": [
    "2 cups thinly sliced rhubarb",
    "1 small red onion, coarsely chopped",
    "3 roma (plum) tomatoes, finely diced"
  ],
  "recipeId": 22387,
  "recipeTitle": "Tangy Rhubarb Salsa",
  "subcategory": "Salsa"
}

Wrap Up

Helidon 4.4 is making the SE flavor feel a lot more like a high-productivity framework without sacrificing performance. By shifting the data transformation logic to the database with Duality Views and using build-time code generation for injection, we can build services that are both incredibly fast and easy to maintain.

Now, you may have noticed that I said “like Spring” a lot in this post – and that’s because of two things – I do happen to use Spring a lot more than I use Helidon, and I like it. So I am very happy that Helidon is looking more like Spring, it makes it a lot easier to switch between the two, and I think it lowers the barrier to entry for people who are coming from the Spring world.

Grab the code from the Helidon-Eats repo and let me know what you think – and stay tuned for the next steps as I explore Helidon AI!

Using Reflection to Help LLMs Write Better SQL https://redstack.dev/2025/11/12/using-reflection-to-help-llms-write-better-sql/ https://redstack.dev/2025/11/12/using-reflection-to-help-llms-write-better-sql/#respond <![CDATA[Mark Nelson]]> Wed, 12 Nov 2025 22:18:47 +0000 <![CDATA[Uncategorized]]> https://redstack.dev/?p=4244 <![CDATA[Getting LLMs to write good SQL can be tricky. Sure, they can generate syntactically correct queries, but do those queries actually answer the question you asked? Sometimes an LLM might give you technically valid SQL that doesn’t quite capture what … Continue reading ]]> <![CDATA[

Getting LLMs to write good SQL can be tricky. Sure, they can generate syntactically correct queries, but do those queries actually answer the question you asked? Sometimes an LLM might give you technically valid SQL that doesn’t quite capture what you’re really looking for.

I wanted to experiment with the reflection pattern to see if we could get better results. The idea is simple: after the LLM generates SQL and executes it, have it reflect on whether the query actually answers the original question. If not, let it try again with the benefit of seeing both the question and the initial results.

Let me show you how this works.

Setting up the database

I used an Oracle Autonomous Database on Oracle Cloud for this experiment. First, I created a user with the necessary permissions. Connect as ADMIN and run this:

create user moviestream identified by <password>;
grant connect, resource, unlimited tablespace to moviestream;
grant execute on dbms_cloud to moviestream;
grant execute on dbms_cloud_repo to moviestream;
grant create table to moviestream;
grant create view to moviestream;
grant all on directory data_pump_dir to moviestream;
grant create procedure to moviestream;
grant create sequence to moviestream;
grant create job to moviestream;

Next, let’s load the sample dataset. Still as ADMIN, run this:

declare 
    l_uri varchar2(500) := 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/c4u04/b/building_blocks_utilities/o/setup/workshop-setup.sql';
begin
    dbms_cloud_repo.install_sql(
        content => to_clob(dbms_cloud.get_object(object_uri => l_uri))
    );
end;
/

Then connect as the moviestream user and run this to load the rest of the dataset:

BEGIN
    workshop.add_dataset(tag => 'end-to-end');
END;
/

This takes a few minutes to complete, after which we have a database with customer and sales data to work with.

The approach

The reflection pattern works like this:

  1. Give the LLM the database schema and a natural language question
  2. LLM generates SQL (v1)
  3. Execute the SQL and get results
  4. LLM reflects: “Does this SQL actually answer the question?”
  5. Generate improved SQL (v2) based on the reflection
  6. Execute v2 and provide the final answer

The key insight here is that by seeing the actual results, the LLM can judge whether it interpreted the question correctly. For example, if you ask “who are our top customers?”, the LLM might initially think “highest income” when you actually meant “highest spending”. Seeing the results helps it course-correct.

Setting up the Python environment

I used a Jupyter notebook for this experiment. First, let’s install the libraries we need:

%pip install aisuite oracledb 

I’m using Andrew Ng’s aisuite for a unified interface to different LLM providers, and oracledb to connect to the database.

Now let’s import aisuite:

import aisuite as ai

Connecting to Oracle Autonomous Database

For Oracle Autonomous Database, you’ll need to download the wallet and set up the connection. Here’s how I connected:

import oracledb

username = "moviestream"
password = "<password>"
dsn = "<connection_string>"
wallet = '<path_to_wallet>'

try:
    connection = oracledb.connect(
        user=username, 
        password=password, 
        dsn=dsn,
        config_dir=wallet,
        wallet_location=wallet,
        wallet_password='<wallet_password>')
    print("Connection successful!")
except Exception as e:
    print(e)
    print("Connection failed!")

And set the TNS_ADMIN environment variable:

import os
os.environ['TNS_ADMIN'] = wallet

Configuring the LLM client

Let’s set up the AI client. I used GPT-4o for this experiment:

client = ai.Client()
os.environ['OPENAI_API_KEY']='<your_api_key>'

models = ['openai:gpt-4o']

Getting the database schema

For the LLM to write good SQL, it needs to know what tables and columns are available. Let’s write a function to introspect the schema:

def get_schema():
    stmt = f'''
    SELECT 
        utc.table_name,
        utc.column_name,
        utc.data_type,
        utc.data_length,
        utc.nullable,
        utc.column_id,
        ucc.comments AS column_comment,
        utab.comments AS table_comment
    FROM 
        user_tab_columns utc
    LEFT JOIN 
        user_col_comments ucc 
        ON utc.table_name = ucc.table_name 
        AND utc.column_name = ucc.column_name
    LEFT JOIN 
        user_tab_comments utab 
        ON utc.table_name = utab.table_name
    ORDER BY 
        utc.table_name, 
        utc.column_id;
    '''

    cursor = connection.cursor()
    cursor.execute(stmt)
    rows = cursor.fetchall()

    # Convert to one long string
    result_string = '\n'.join([str(row) for row in rows])

    cursor.close()

    return result_string

This function queries the Oracle data dictionary to get information about all tables and columns, including any comments. It returns everything as a single string that we can pass to the LLM.

Generating SQL from natural language

Now let’s write the function that takes a natural language question and generates SQL:

def generate_sql(question: str, schema: str, model: str):
    prompt = f'''
    You are an SQL assistant for Oracle Database.
    You create Oracle SQL statements to help answer user questions.
    Given the user's question and the schema information, write an SQL
    query to answer the question.

    Schema:
    {schema}

    User question:
    {question}

    Respond with the SQL only.  Do not add any extra characters or delimiters.
    '''
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

This function takes the question, the schema information, and the model to use. It constructs a prompt that gives the LLM the context it needs and asks for just the SQL query.

Executing SQL queries

We need a function to actually run the generated SQL:

def execute_sql(stmt):
    cursor = connection.cursor()
    cursor.execute(stmt)
    rows = cursor.fetchall()

    # Convert to one long string
    result_string = '\n'.join([str(row) for row in rows])

    cursor.close()

    return result_string

This executes the query and returns the results as a string.

The reflection step

Here’s where it gets interesting – the function that reviews the SQL and results, and potentially generates improved SQL:

import json

def refine_sql(question, sql_query, output, schema, model):
    prompt = f'''
    You are a SQL reviewer and refiner. 

    User asked:
    {question}

    Original SQL:
    {sql_query}

    SQL Output:
    {output}

    Schema:
    {schema}

    Step 1: Evaluate if the SQL OUTPUT fully answers the user's question.
    Step 2: If improvement is needed, provide a refined SQL query for Oracle.
    If the original SQL is already correct, return it unchanged.

    Return a strict JSON object with two fields:
    - "feedback": brief evaluation and suggestions
    - "refined_sql": the final SQL to run

    Return ONLY the actual JSON document.
    Do NOT add any extra characters or delimiters outside of the actual JSON itself.
    In particular do NOT include backticks before and after the JSON document.
    '''

    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )

    content = response.choices[0].message.content
    try:
        obj = json.loads(content)
        feedback = str(obj.get("feedback", "")).strip()
        refined_sql = str(obj.get("refined_sql", sql_query)).strip()
        if not refined_sql:
            refined_sql = sql_query
    except Exception:
        # Fallback if model doesn't return valid JSON
        feedback = content.strip()
        refined_sql = sql_query

    return feedback, refined_sql

This is the heart of the reflection pattern. The function:

  1. Shows the LLM the original question, the generated SQL, and the actual results
  2. Asks it to evaluate whether the SQL output really answers the question
  3. If not, asks for an improved query
  4. Returns both the feedback and the refined SQL as JSON

The JSON format makes it easy to parse the response and extract both pieces of information. I had to be fairly pedantic to get gpt-4o to give me just JSON!

Providing a final answer

Finally, let’s write a function to convert the query results into a natural language answer:

def provide_final_answer(question, output, model):
    prompt = f'''
    You are a helpful assistant.
    Given a user's question, and the results of a database query
    which has been created, evaluated, improved and executed already
    in order to get the provided output, you should provide an
    answer to the user's question.

    User question:
    {question}

    Query results:
    {output}
    '''
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

This takes the final query results and turns them into a friendly, natural language response for the user.

Putting it all together

Now let’s create the main function that orchestrates the entire process:

def generate_and_reflect(question: str):
    
    schema = get_schema()
    print('SCHEMA')
    print(schema)
    print()

    sql_v1 = generate_sql(question, schema, models[0])
    print("SQL V1")
    print(sql_v1)
    print()

    output_v1 = execute_sql(sql_v1)
    print("SQL V1 output")
    print(output_v1)
    print()

    feedback, sql_v2 = refine_sql(question, sql_v1, output_v1, schema, models[0])
    print("FEEDBACK")
    print(feedback)
    print()
    print("SQL V2")
    print(sql_v2)
    print()

    output_v2 = execute_sql(sql_v2)
    print("SQL V2 output")
    print(output_v2)
    print()

    final_answer = provide_final_answer(question, output_v2, models[0])
    print("FINAL ANSWER")
    print(final_answer)
    print()

This function:

  1. Gets the database schema
  2. Generates the first SQL query
  3. Executes it and prints the results
  4. Sends everything to the reflection function for evaluation
  5. Generates and executes the refined SQL
  6. Converts the final results into a natural language answer

Running the experiment

Let’s try it out with a question that could be interpreted multiple ways:

generate_and_reflect('who are our top customers?')

The results

Here’s what happened when I ran this:

First attempt (SQL V1):

SELECT CUST_ID, FIRST_NAME, LAST_NAME, EMAIL, INCOME
FROM CUSTOMER
ORDER BY INCOME DESC
FETCH FIRST 10 ROWS ONLY;

The LLM interpreted “top customers” as customers with the highest income. It returned folks with incomes around $187,000:

(1138797, 'Haruru', 'Takahashi', '[email protected]', 187168.8)
(1007335, 'Eddie', 'Crawford', '[email protected]', 187145.4)
(1404002, 'Yuuto', 'Arai', '[email protected]', 187136.04)
...

Reflection:

The original SQL query retrieves the top 10 customers based on income, which may not 
fully answer the question of 'top customers' as it could be interpreted in terms of 
sales or transactions. To better answer the question, we should consider the total 
sales or transactions made by each customer.

Great! The LLM recognized that “top customers” probably means customers who spend the most, not customers who earn the most.

Second attempt (SQL V2):

SELECT C.CUST_ID, C.FIRST_NAME, C.LAST_NAME, C.EMAIL, SUM(S.ACTUAL_PRICE) AS TOTAL_SALES 
FROM CUSTOMER C 
JOIN CUSTSALES S ON C.CUST_ID = S.CUST_ID 
GROUP BY C.CUST_ID, C.FIRST_NAME, C.LAST_NAME, C.EMAIL 
ORDER BY TOTAL_SALES DESC 
FETCH FIRST 10 ROWS ONLY;

Much better! Now it’s joining with the sales data and calculating total spending per customer.

Final results:

(1234517, 'Tsubasa', 'Nakajima', '[email protected]', 2356.049999999997)
(1280887, 'Steffi', 'Bielvenstram', '[email protected]', 2334.7299999999996)
(1017254, 'Guadalupe', 'Zamora', '[email protected]', 2329.7599999999998)
...

The top customer is Tsubasa Nakajima with $2,356.05 in total sales, followed by Steffi Bielvenstram with $2,334.73, and so on. These are very different customers from the high-income list we got in the first attempt!

Natural language answer:

Our top customers, based on the provided data, are:

1. Tsubasa Nakajima - Email: [email protected], Total: $2356.05
2. Steffi Bielvenstram - Email: [email protected], Total: $2334.73
3. Guadalupe Zamora - Email: [email protected], Total: $2329.76
...

These customers have the highest total amounts associated with them.

What I learned

This reflection approach really does help. The LLM is pretty good at recognizing when its initial SQL doesn’t quite match the intent of the question – especially when it can see the actual results.

The pattern of generate → execute → reflect → regenerate is more expensive (two LLM calls instead of one for generation, plus one more for the final answer), but the quality improvement is noticeable. For production use, you might want to:

  • Cache schema information instead of fetching it every time
  • Add more sophisticated error handling for SQL errors
  • Consider running both queries in parallel and comparing results
  • Track which types of questions benefit most from reflection
  • Use the reflection feedback to build a dataset for fine-tuning

The approach is straightforward to implement and the results speak for themselves – the reflection step caught a subtle but important misinterpretation that would have given technically correct but unhelpful results.

Give it a try with your own database and questions – I think you’ll find the reflection step catches a lot of these subtle misinterpretations that would otherwise lead to valid but wrong answers.

What next? I am going to experiment with some more complex questions, and then compare the performance of a number of different LLMs to see how they go with and without reflection. Stay tuned 🙂

Using Multiple Datasources with Spring Boot and Spring Data JPA https://redstack.dev/2025/10/30/using-multiple-datasources-with-spring-boot-and-spring-data-jpa/ https://redstack.dev/2025/10/30/using-multiple-datasources-with-spring-boot-and-spring-data-jpa/#respond <![CDATA[Mark Nelson]]> Thu, 30 Oct 2025 22:19:45 +0000 <![CDATA[Uncategorized]]> <![CDATA[cloud]]> <![CDATA[data-source]]> <![CDATA[Java]]> <![CDATA[spring-boot]]> <![CDATA[technology]]> <![CDATA[tutorial]]> <![CDATA[UCP]]> https://redstack.dev/?p=4228 <![CDATA[Hi everyone! Today I want to show you how to configure multiple datasources in a Spring Boot application using Spring Data JPA and the Oracle Spring Boot Starter for Universal Connection Pool (UCP). This is a pattern you’ll need when … Continue reading ]]> <![CDATA[

Hi everyone! Today I want to show you how to configure multiple datasources in a Spring Boot application using Spring Data JPA and the Oracle Spring Boot Starter for Universal Connection Pool (UCP).

This is a pattern you’ll need when you have a single application that needs to connect to multiple databases. Maybe you have different domains in separate databases, or you’re working with legacy systems, or you need to separate read and write operations across different database instances. Whatever the reason, Spring Boot makes this pretty straightforward once you understand the configuration pattern.

I’ve put together a complete working example on GitHub at https://github.com/markxnelson/spring-multiple-jpa-datasources, and in this post I’ll walk you through how to build it from scratch.

The Scenario

For this example, we’re going to build a simple application that manages two separate domains:

  • Customers – stored in one database
  • Products – stored in a different database

Each domain will have its own datasource, entity manager, and transaction manager. We’ll use Spring Data JPA repositories to interact with each database, and we’ll show how to use both datasources in a REST controller.

I am assuming you have a database with two users called customer and product and some tables. Here’s the SQL to set that up:

$ sqlplus sys/Welcome12345@localhost:1521/FREEPDB1 as sysdba

alter session set container=freepdb1;
create user customer identified by Welcome12345;
create user product identified by Welcome12345;
grant connect, resource, unlimited tablespace to customer;
grant connect, resource, unlimited tablespace to product;
commit;

$ sqlplus customer/Welcome12345@localhost:1521/FREEPDB1

create table customer (id number, name varchar2(64));
insert into customer (id, name) values (1, 'mark');
commit;

$ sqlplus product/Welcome12345@localhost:1521/FREEPDB1

create table product (id number, name varchar2(64));
insert into product (id, name) values (1, 'coffee machine');
commit;

Step 1: Dependencies

Let’s start with the Maven dependencies. Here’s what you’ll need in your pom.xml:

<dependencies>
    <!-- Spring Boot Starter Web for REST endpoints -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>

    <!-- Spring Boot Starter Data JPA -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>

    <!-- Oracle Spring Boot Starter for UCP -->
    <dependency>
        <groupId>com.oracle.database.spring</groupId>
        <artifactId>oracle-spring-boot-starter-ucp</artifactId>
        <version>25.3.0</version>
    </dependency>

</dependencies>

The key dependency here is the oracle-spring-boot-starter-ucp, which provides autoconfiguration for Oracle’s Universal Connection Pool. UCP is Oracle’s high-performance connection pool implementation that provides features like connection affinity, Fast Connection Failover, and Runtime Connection Load Balancing.

Step 2: Configure the Datasources in application.yaml

Now let’s configure our two datasources in the application.yaml file. We’ll define connection properties for both the customer and product databases:

spring:
  application:
    name: demo

  jpa:
    customer:
      properties:
        hibernate.dialect: org.hibernate.dialect.OracleDialect
        hibernate.hbm2ddl.auto: validate
        hibernate.format_sql: true
        hibernate.show_sql: true
    product:
      properties:
        hibernate.dialect: org.hibernate.dialect.OracleDialect
        hibernate.hbm2ddl.auto: validate
        hibernate.format_sql: true
        hibernate.show_sql: true

  datasource:
    customer:
        url: jdbc:oracle:thin:@localhost:1521/freepdb1
        username: customer
        password: Welcome12345
        driver-class-name: oracle.jdbc.OracleDriver
        type: oracle.ucp.jdbc.PoolDataSourceImpl
        oracleucp:
          connection-factory-class-name: oracle.jdbc.pool.OracleDataSource
          connection-pool-name: CustomerConnectionPool
          initial-pool-size: 15
          min-pool-size: 10
          max-pool-size: 30
          shared: true
    product:
        url: jdbc:oracle:thin:@localhost:1521/freepdb1
        username: product
        password: Welcome12345
        driver-class-name: oracle.jdbc.OracleDriver
        type: oracle.ucp.jdbc.PoolDataSourceImpl
        oracleucp:
          connection-factory-class-name: oracle.jdbc.pool.OracleDataSource
          connection-pool-name: ProductConnectionPool
          initial-pool-size: 15
          min-pool-size: 10
          max-pool-size: 30
          shared: true

Notice that we’re using custom property prefixes (spring.datasource.customer and spring.datasource.product) instead of the default spring.datasource. This is because Spring Boot’s autoconfiguration will only create a single datasource by default. When you need multiple datasources, you need to create them manually and use custom configuration properties.

In this example, both datasources happen to point to the same database server but use different schemas (users). In a real-world scenario, these would typically point to completely different database instances.

Step 3: Configure the Customer Datasource

Now we need to create the configuration classes that will set up our datasources, entity managers, and transaction managers. Let’s start with the customer datasource.

Create a new package called customer and add a configuration class called CustomerDataSourceConfig.java:

package com.example.demo.customer;

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.orm.jpa.EntityManagerFactoryBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(entityManagerFactoryRef = "customerEntityManagerFactory", transactionManagerRef = "customerTransactionManager", basePackages = {
        "com.example.demo.customer" })
public class CustomerDataSourceConfig {

    @Bean(name = "customerProperties")
    @ConfigurationProperties("spring.datasource.customer")
    public DataSourceProperties customerDataSourceProperties() {
        return new DataSourceProperties();
    }

    /**
     * Creates and configures the customer DataSource.
     *
     * @param properties the customer datasource properties
     * @return configured DataSource instance
     */
    @Primary
    @Bean(name = "customerDataSource")
    @ConfigurationProperties(prefix = "spring.datasource.customer")
    public DataSource customerDataSource(@Qualifier("customerProperties") DataSourceProperties properties) {
        return properties.initializeDataSourceBuilder().build();
    }

    /**
     * Reads customer JPA properties from application.yaml.
     *
     * @return Map of JPA properties
     */
    @Bean(name = "customerJpaProperties")
    @ConfigurationProperties("spring.jpa.customer.properties")
    public java.util.Map<String, String> customerJpaProperties() {
        return new java.util.HashMap<>();
    }

    /**
     * Creates and configures the customer EntityManagerFactory.
     *
     * @param builder the EntityManagerFactoryBuilder
     * @param dataSource the customer datasource
     * @param jpaProperties the JPA properties from application.yaml
     * @return configured LocalContainerEntityManagerFactoryBean
     */
    @Primary
    @Bean(name = "customerEntityManagerFactory")
    public LocalContainerEntityManagerFactoryBean customerEntityManagerFactory(EntityManagerFactoryBuilder builder,
            @Qualifier("customerDataSource") DataSource dataSource,
            @Qualifier("customerJpaProperties") java.util.Map<String, String> jpaProperties) {

        return builder.dataSource(dataSource)
                .packages("com.example.demo.customer")
                .persistenceUnit("customers")
                .properties(jpaProperties)
                .build();
    }

    @Primary
    @Bean(name = "customerTransactionManager")
    public PlatformTransactionManager customerTransactionManager(
            @Qualifier("customerEntityManagerFactory") LocalContainerEntityManagerFactoryBean entityManagerFactory) {
        return new JpaTransactionManager(entityManagerFactory.getObject());
    }

}

Let’s break down what’s happening here:

  1. @EnableJpaRepositories – This tells Spring Data JPA where to find the repositories for this datasource. We specify the base package (com.example.demo.customer), and we reference the entity manager factory and transaction manager beans by name.
  2. @Primary – We mark the customer datasource as the primary one. This means it will be used by default when autowiring a datasource, entity manager, or transaction manager without a @Qualifier. You must have exactly one primary datasource when using multiple datasources.
  3. customerDataSource() – This creates the datasource bean using Spring Boot’s DataSourceBuilder. The @ConfigurationProperties annotation binds the properties from our application.yaml (with the spring.datasource.customer prefix) to the datasource configuration.
  4. customerEntityManagerFactory() – This creates the JPA entity manager factory, which is responsible for creating entity managers. We configure it to scan for entities in the customer package and set up Hibernate properties.
  5. customerTransactionManager() – This creates the transaction manager for the customer datasource. The transaction manager handles transaction boundaries and ensures ACID properties.

Step 4: Configure the Product Datasource

Now let’s create the configuration for the product datasource. Create a new package called product and add ProductDataSourceConfig.java:

package com.example.demo.product;

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.orm.jpa.EntityManagerFactoryBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(entityManagerFactoryRef = "productEntityManagerFactory", transactionManagerRef = "productTransactionManager", basePackages = {
        "com.example.demo.product" })
public class ProductDataSourceConfig {

    @Bean(name = "productProperties")
    @ConfigurationProperties("spring.datasource.product")
    public DataSourceProperties productDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean(name = "productDataSource")
    @ConfigurationProperties(prefix = "spring.datasource.product")
    public DataSource productDataSource(@Qualifier("productProperties") DataSourceProperties properties) {
        return properties.initializeDataSourceBuilder().build();
    }

    /**
     * Reads product JPA properties from application.yaml.
     *
     * @return Map of JPA properties
     */
    @Bean(name = "productJpaProperties")
    @ConfigurationProperties("spring.jpa.product.properties")
    public java.util.Map<String, String> productJpaProperties() {
        return new java.util.HashMap<>();
    }

    /**
     * Creates and configures the product EntityManagerFactory.
     *
     * @param builder the EntityManagerFactoryBuilder
     * @param dataSource the product datasource
     * @param jpaProperties the JPA properties from application.yaml
     * @return configured LocalContainerEntityManagerFactoryBean
     */
    @Bean(name = "productEntityManagerFactory")
    public LocalContainerEntityManagerFactoryBean productEntityManagerFactory(@Autowired EntityManagerFactoryBuilder builder,
            @Qualifier("productDataSource") DataSource dataSource,
            @Qualifier("productJpaProperties") java.util.Map<String, String> jpaProperties) {

        return builder.dataSource(dataSource)
                .packages("com.example.demo.product")
                .persistenceUnit("products")
                .properties(jpaProperties)
                .build();
    }

    @Bean(name = "productTransactionManager")
    public PlatformTransactionManager productTransactionManager(
            @Qualifier("productEntityManagerFactory") LocalContainerEntityManagerFactoryBean entityManagerFactory) {
        return new JpaTransactionManager(entityManagerFactory.getObject());
    }

}

The product configuration is almost identical to the customer configuration, with a few key differences:

  1. No @Primary annotations – Since we already designated the customer datasource as primary, we don’t mark the product beans as primary.
  2. Different package – The @EnableJpaRepositories points to the product package, and the entity manager factory scans the product package for entities.
  3. Different bean names – All the beans have different names (productDataSource, productEntityManagerFactory, productTransactionManager) to avoid conflicts.

Step 5: Create the Domain Models

Now let’s create the JPA entities for each datasource. First, in the customer package, create Customer.java:

package com.example.demo.customer;

import jakarta.persistence.Entity;
import jakarta.persistence.Id;

@Entity
public class Customer {
    @Id
    public int id;
    public String name;

    public Customer() {
        this.id = 0;
        this.name = "";
    }
}

And in the product package, create Product.java:

package com.example.demo.product;

import jakarta.persistence.Entity;
import jakarta.persistence.Id;

@Entity
public class Product {
    @Id
    public int id;
    public String name;

    public Product() {
        this.id = 0;
        this.name = "";
    }
}

Step 6: Create the Repositories

Now let’s create Spring Data JPA repositories for each entity. In the customer package, create CustomerRepository.java:

package com.example.demo.customer;

import org.springframework.data.jpa.repository.JpaRepository;

public interface CustomerRepository extends JpaRepository<Customer, Integer> {

}

And in the product package, create ProductRepository.java:

package com.example.demo.product;

import org.springframework.data.jpa.repository.JpaRepository;

public interface ProductRepository extends JpaRepository<Product, Integer> {

}

Step 7: Create a REST Controller

Finally, let’s create a REST controller that demonstrates how to use both datasources. Create a controller package and add CustomerController.java:

package com.example.demo.controllers;

import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import com.example.demo.customer.Customer;
import com.example.demo.customer.CustomerRepository;

@RestController
public class CustomerController {

    final CustomerRepository customerRepository;

    public CustomerController(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    @GetMapping("/customers")
    public List<Customer> getCustomers() {
        return customerRepository.findAll();
    }

}

A few important things to note about the controller:

  1. Transaction Managers – When you have multiple datasources, write operations should explicitly specify which transaction manager to use, e.g. @Transactional("customerTransactionManager") or @Transactional("productTransactionManager"). The simple read-only endpoint above doesn’t need this, but see the sketch after this list for what a write operation could look like. If you don’t specify a transaction manager, Spring will use the primary one (customer) by default.
  2. Repository Autowiring – The repositories are autowired normally. Spring knows which datasource each repository uses based on the package they’re in, which we configured in our datasource configuration classes.
  3. Cross-datasource Operations – If a single method works with both datasources, note that those operations are not in a distributed transaction – if one fails, the other won’t automatically roll back. If you need distributed transactions across multiple databases, you would need to use JTA (Java Transaction API).
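
For illustration, here is a minimal sketch of what a write operation with an explicit transaction manager could look like. This endpoint is not part of the sample project – the class name and mapping are made up for the example:

package com.example.demo.controllers;

import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

import com.example.demo.customer.Customer;
import com.example.demo.customer.CustomerRepository;

@RestController
public class CustomerWriteController {

    final CustomerRepository customerRepository;

    public CustomerWriteController(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    // Explicitly pin this write to the customer transaction manager.
    // Without the qualifier, Spring would fall back to the primary one
    // (customer) anyway, but being explicit avoids surprises.
    @Transactional("customerTransactionManager")
    @PostMapping("/customers")
    public Customer createCustomer(@RequestBody Customer customer) {
        return customerRepository.save(customer);
    }
}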

Let’s also create ProductController.java:

package com.example.demo.controllers;

import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import com.example.demo.product.Product;
import com.example.demo.product.ProductRepository;

@RestController
public class ProductController {

    final ProductRepository productRepository;

    public ProductController(ProductRepository productRepository) {
        this.productRepository = productRepository;
    }

    @GetMapping("/products")
    public List<Product> getProducts() {
        return productRepository.findAll();
    }

}

Testing the Application

Now you can run your application! Make sure you have two Oracle database users created (customer and product), or adjust the configuration to point to your specific databases.
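
If you still need to create those users, statements along these lines should work. This is illustrative only – run it as a privileged user in your PDB, and pick your own passwords:

-- Example setup: create the two schemas used by the datasources
create user customer identified by Welcome12345;
grant connect, resource, unlimited tablespace to customer;

create user product identified by Welcome12345;
grant connect, resource, unlimited tablespace to product;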

Start the application:

mvn spring-boot:run

Then you can test it with some curl commands:

# Get all customers
$ curl http://localhost:8080/customers
[{"id":1,"name":"mark"}]

# Get all products
$ curl http://localhost:8080/products
[{"id":1,"name":"coffee machine"}]

Wrapping Up

And there you have it! We’ve successfully configured a Spring Boot application with multiple datasources using Spring Data JPA and Oracle’s Universal Connection Pool. The key points to remember are:

  1. Custom configuration properties – Use custom prefixes for each datasource in your application.yaml
  2. Manual configuration – Create configuration classes for each datasource with beans for the datasource, entity manager factory, and transaction manager
  3. Primary datasource – Designate one datasource as primary using @Primary
  4. Package organization – Keep entities and repositories for each datasource in separate packages
  5. Explicit transaction managers – Specify which transaction manager to use for write operations with @Transactional

This pattern works great when you need to connect to multiple databases, whether they’re different types of databases or different instances of the same database. Oracle’s Universal Connection Pool provides excellent performance and reliability for your database connections.

I hope this helps you work with multiple datasources in your Spring Boot applications! The complete working code is available on GitHub at https://github.com/markxnelson/spring-multiple-jpa-datasources.

Happy coding!

]]>
https://redstack.dev/2025/10/30/using-multiple-datasources-with-spring-boot-and-spring-data-jpa/feed/ 0 4228
Custom vector distance functions in Oracle (using JavaScript) https://redstack.dev/2025/10/20/custom-vector-distance-functions-and-hybrid-vector-search-in-oracle-using-javascript/ https://redstack.dev/2025/10/20/custom-vector-distance-functions-and-hybrid-vector-search-in-oracle-using-javascript/#respond <![CDATA[Mark Nelson]]> Mon, 20 Oct 2025 16:00:42 +0000 <![CDATA[Uncategorized]]> <![CDATA[ai]]> <![CDATA[artificial-intelligence]]> <![CDATA[jaccard]]> <![CDATA[javascript]]> <![CDATA[llm]]> <![CDATA[mle]]> <![CDATA[oracle]]> <![CDATA[rag]]> <![CDATA[technology]]> <![CDATA[vector-search]]> https://redstack.dev/?p=4188 <![CDATA[In case you missed it, Oracle Database 26ai was announced last week at Oracle AI World, with a heap of new AI features and capabilities like hybrid vector search, MCP server support, acceleration with NVIDIA and much more – check … Continue reading ]]> <![CDATA[

In case you missed it, Oracle Database 26ai was announced last week at Oracle AI World, with a heap of new AI features and capabilities like hybrid vector search, MCP server support, acceleration with NVIDIA and much more – check the link for details.

Of course, I wanted to check it out, and I was thinking about what to do first. I remembered this LinkedIn post from Anders Swanson about implementing custom vector distance functions in Oracle using the new JavaScript capabilities, and I thought that could be something interesting to do, so I am going to show you how to implement and use Jaccard distance for dense vector embeddings for similarity searches.

Now, this is a slightly contrived example, because I am more interested in showing you how to add a custom metric than in the actual metric itself. I chose Jaccard because the actual implementation is pretty compact.

Now, Oracle does already include Jaccard distance, but only for the BINARY data type, which is where Jaccard is mostly used. But there is a version that can be used for continuous/real-valued vectors as well (this version is for dense vectors), and that is what we will implement.

This is the formula for Jaccard similarity for continuous vectors. This is also known as the Tanimoto coefficient. It is the intersection divided by the union (or zero if the union is zero):
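
In text form, summing element-wise over the dimensions i, the similarity is:

J(a, b) = Σ min(aᵢ, bᵢ) / Σ max(aᵢ, bᵢ)   (or 0 if the denominator is 0)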

To get the Jaccard distance, we just subtract the Jaccard similarity from one.

Before we start, let’s look at a two-dimensional example to get a feel for how it works. Of course, the real vectors created by embedding models have many more dimensions, but it is hard for us to visualize more than two or three dimensions without also introducing techniques like dimensionality reduction and projection.

Here we have two vectors A [5 8] and B [7 4]:

The union is calculated using the element-wise maximums, as you can see in the formula above, so in this example it is max(5,7) + max(8,4) = 7 + 8 = 15. The intersection is calculated with the element-wise minimums, so it is min(5,7) + min(8,4) = 5 + 4 = 9.

So in this example, the Jaccard similarity is 9 / 15 = 0.6

And so the Jaccard distance is 1 – 0.6 = 0.4
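
If you want to double-check that arithmetic, a couple of lines of Python will do it (just a sanity check, nothing to do with the database yet):

a, b = [5, 8], [7, 4]
intersection = sum(min(x, y) for x, y in zip(a, b))  # 5 + 4 = 9
union = sum(max(x, y) for x, y in zip(a, b))         # 7 + 8 = 15
print(1 - intersection / union)                      # 0.4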

Ok, now that we have some intuition about how this distance metric works, let’s implement it in Oracle.

Start up an Oracle Database

First, let’s fire up Oracle Database Free 26ai in a container:

docker run -d --name db26ai \
    -p 1521:1521 \
    -e ORACLE_PWD=Welcome12345 \
    -v db26ai-volume:/opt/oracle/oradata \
    container-registry.oracle.com/database/free:latest

This will pull the latest image, which at the time of writing is 26ai (version tag 23.26.0.0). You can check the logs to see when startup is complete – you’ll see the message “DATABASE IS READY TO USE”:

docker logs -f db26ai

Let’s create a user called vector with the necessary privileges:

docker exec -i db26ai sqlplus sys/Welcome12345@localhost:1521/FREEPDB1 as sysdba <<EOF
alter session set container=FREEPDB1;
create user vector identified by vector;
grant connect, resource, unlimited tablespace, create credential, create procedure, create mle, create any index to vector;
commit;
EOF

Now you can connect with your favorite client. I am going to use Oracle SQL Developer for VS Code. See the link for install instructions.

Implement the custom distance function

Open up an SQL Worksheet, or run this in your tool of choice:


create or replace function jaccard_distance("a" vector, "b" vector)
return binary_double 
deterministic parallel_enable
as mle language javascript pure {{
    // check the vectors are the same length
    if (a.length !== b.length) {
        throw new Error('Vectors must have same length');
    }

    let intersection = 0;
    let union = 0;

    for (let i = 0; i < a.length; i++) { 
        intersection += Math.min(a[i], b[i]);
        union += Math.max(a[i], b[i]);
    }

    // handle the case where union is zero (all-zero vectors)
    if (union === 0) {
        return 0;
    }

    const similarity = intersection / union;
    return 1 - similarity;

}};
/

Let’s walk through this. First, you see that we are creating a function called jaccard_distance which accepts two vectors (a and b) as input and returns a binary_double. This function signature is required for distance functions. Next we must include the deterministic keyword, and we have also included the parallel_enable keyword so that this function could be used with HNSW vector indexes. For the purposes of this example, you can just ignore those or assume that they are just needed as part of the function signature.

Next you see that we mention this will be an MLE function written in JavaScript, and we added the pure keyword to let the database know that this is a pure function – meaning it has no side effects, it will not update any data, and its output will always be the same for a given set of inputs (i.e., that it is memoizable).

Then we have the actual implementation of the function. First, we check that the vectors have the same length (i.e., the same number of dimensions) which is required for this calculation to be applicable.

Then we work through the vectors and collect the minimums and maximums to calculate the intersection and the union.

Next, we check if the union is zero, and if so we return zero to handle that special case. And finally, we calculate the similarity, then subtract it from one to get the distance and return that.
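
Before pointing it at a table, you can sanity-check the function directly with the two vectors from our example – this should return the 0.4 we calculated by hand:

select jaccard_distance(vector('[5, 8]'), vector('[7, 4]')) as distance from dual;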

Using our custom distance function

Great, so let’s test our function. We can start by creating a table t1 to store some vectors:

create table t1 (
    id number,
    v vector(2, float32)
);

And let’s add a couple of vectors, including the one we saw in the example above [5 8]:

insert into t1 (id, v) values 
(1, vector('[5, 8]')),
(2, vector('[1, 2]'));

You can run a simple select statement to see the contents of the table:

select * from t1;

This will give these results:

ID     V
1      [5.0E+000,8.0E+000]
2      [1.0E+000,2.0E+000]

Now let’s use our function to see the Jaccard distance for each vector in our table t1 from the other vector we used in the example above [7 4]:

select 
    v,
    jaccard_distance(v, vector('[7, 4]')) distance
from t1
order by distance; 

This returns these results:

V                       DISTANCE
[5.0E+000,8.0E+000]     0.4
[1.0E+000,2.0E+000]     0.7272727272727273

As you can see, the Jaccard distance from [5 8] to [7 4] is 0.4, as we calculated in the example above, and [1 2] to [7 4] is 0.72…

Let’s see how it works with large embeddings

Ok, two dimension vectors are good for simple visualization, but let’s try this out with some ‘real’ vectors.

I am using Visual Studio Code with the Python and Jupyter extensions from Microsoft installed.

Create a new Jupyter Notebook using File > New File… then choose Jupyter Notebook as the type of file, and save your new file as jaccard.ipynb.

First, we need to set up the Python runtime environment. Click on the Select Kernel button (it’s on the top right). Select Python Environment then Create Python Environment. Select the option to create a Venv (Virtual Environment) and choose your Python interpreter. I recommend using at least Python 3.11. This will download all the necessary files and will take a minute or two.

Now, let’s install the libraries we will need – enter this into a cell and run it:

%pip install oracledb sentence-transformers

Now, connect to the same Oracle database (again, enter this into a cell and run it):

import oracledb

username = "vector"
password = "vector"
dsn = "localhost:1521/FREEPDB1"

try:
    connection = oracledb.connect(
        user=username, 
        password=password, 
        dsn=dsn)
    print("Connection successful!")
except Exception as e:
    print("Connection failed!")

Let’s create a table to hold 1024 dimension vectors that we will create with the mxbai-embed-large-v1 embedding model. Back in your SQL Worksheet, run this statement:

create table t2 (
    id number,
    v vector(1024, float32)
);

Ok, now let’s create some embeddings. Back in your notebook, create a new cell with this code:

import oracledb
from sentence_transformers import SentenceTransformer

# Initialize the embedding model
print("Loading embedding model...")
model = SentenceTransformer('mixedbread-ai/mxbai-embed-large-v1')

# Your text data
texts = [
    "The quick brown fox jumps over the lazy dog",
    "Machine learning is a subset of artificial intelligence",
    "Oracle Database 23ai supports vector embeddings",
    "Python is a popular programming language",
    "Embeddings capture semantic meaning of text"
]

# Generate embeddings
print("Generating embeddings...")
embeddings = model.encode(texts)

Let’s discuss what we are doing in this code. First, we are going to download the embedding model using the SentenceTransformer. Then, we define a few simple texts that we can use for this example and use the embedding model to create the vector embeddings for those texts.

If you want to see what the embeddings look like, just enter “embeddings” in a cell and run it. In the output you can see the shape is 5 (rows) with 1024 dimensions and the type is float32.
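
Or check it programmatically in a cell of its own:

# Should print (5, 1024) float32 – five texts, 1024 dimensions each
print(embeddings.shape, embeddings.dtype)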

Now, let’s insert the embeddings into our new table t2:

import array 
cursor = connection.cursor()

# Insert data
for i in range(len(embeddings)):
    cursor.execute("""
        INSERT INTO t2 (id, v)
        VALUES (:1, :2)
    """, [i, array.array('f', embeddings[i].tolist())])

connection.commit()
print(f"Successfully inserted {len(texts)} records")

You can take a look at the vectors using this simple query (back in your SQL Worksheet):

select * from t2

Which will show you something like this:

And, now let’s try our distance function with these vectors. Back in your notebook, run this cell. I’ve included the built-in cosine distance as well, just for comparison purposes:

query = array.array('f', model.encode("Antarctica is the driest continent").tolist())

cursor = connection.cursor()
cursor.execute("""
    select 
        id,
        jaccard_distance(v, :1),
        vector_distance(v, :2, cosine)
    from t2
    order by id
""", [query, query])

for row in cursor:
    print(f"id: {row[0]} has jaccard distance: {row[1]} and cosine distance: {row[2]}")

cursor.close()

Your output will look something like this:

id: 0 has jaccard distance: 2.0163214889484307 and cosine distance: 0.7859490566650003
id: 1 has jaccard distance: 2.0118706751976925 and cosine distance: 0.6952327173906239
id: 2 has jaccard distance: 2.0152858933816775 and cosine distance: 0.717824211314015
id: 3 has jaccard distance: 2.0216149035530537 and cosine distance: 0.6455277387099003
id: 4 has jaccard distance: 2.0132575761281766 and cosine distance: 0.6962028121886988

You might notice that these Jaccard distances are greater than 1. That can happen because, unlike our two-dimensional example, these embeddings contain negative values, so the sum of the element-wise minimums can be negative and the similarity can drop below zero – one reason this metric is normally used with non-negative (or binary) vectors. Well, there you go! We implemented and used a custom vector distance function. Enjoy!

]]>
https://redstack.dev/2025/10/20/custom-vector-distance-functions-and-hybrid-vector-search-in-oracle-using-javascript/feed/ 0 4188
Let’s make a simple MCP tool for Oracle AI Vector Search https://redstack.dev/2025/10/08/lets-make-a-simple-mcp-tool-for-oracle-ai-vector-search/ https://redstack.dev/2025/10/08/lets-make-a-simple-mcp-tool-for-oracle-ai-vector-search/#comments <![CDATA[Mark Nelson]]> Wed, 08 Oct 2025 20:02:10 +0000 <![CDATA[Uncategorized]]> <![CDATA[ai]]> <![CDATA[artificial-intelligence]]> <![CDATA[llm]]> <![CDATA[mcp]]> <![CDATA[rag]]> <![CDATA[technology]]> https://redstack.dev/?p=4164 <![CDATA[In this earlier post, we created a vector store in our Oracle Database 23ai and populated it with some content from Moby Dick. Since MCP is very popular these days, I thought it might be interesting to look how to … Continue reading ]]> <![CDATA[

In this earlier post, we created a vector store in our Oracle Database 23ai and populated it with some content from Moby Dick. Since MCP is very popular these days, I thought it might be interesting to look at how to create a very simple MCP server to expose the similarity search as an MCP tool.

Let’s jump right into it. First we are going to need a requirements.txt file with a list of the dependencies we need:

mcp>=1.0.0
oracledb
langchain-community
langchain-huggingface
sentence-transformers
pydantic

And then go ahead and install these by running:

pip install -r requirements.txt

Note: I used Python 3.12 and a virtual environment.

Now let’s create a file called mcp_server.py and get to work! Let’s start with some imports:

import asyncio
import oracledb
from mcp.server import Server
from mcp.types import Tool, TextContent
from pydantic import BaseModel
from langchain_community.vectorstores import OracleVS
from langchain_community.vectorstores.utils import DistanceStrategy
from langchain_huggingface import HuggingFaceEmbeddings

And we are going to need the details of the database so we can connect to that, so let’s define some variables to hold those parameters:

# Database connection parameters for Oracle Vector Store
DB_USERNAME = "vector"
DB_PASSWORD = "vector"
DB_DSN = "localhost:1521/FREEPDB1" 
TABLE_NAME = "moby_dick_500_30"  

Note: These match the database and vector store used in the previous post.

Let’s create a function to connect to the database, and set up the embedding model and the vector store.

# Global variables for database connection and embedding model
# These are initialized once on server startup for efficiency
embedding_model = None  # HuggingFace sentence transformer model
vector_store = None     # LangChain OracleVS wrapper for vector operations
connection = None       # Oracle database connection

def initialize_db():
    """
    Initialize database connection and vector store

    This function is called once at server startup to establish:
    1. Connection to Oracle database
    2. HuggingFace embedding model (sentence-transformers/all-mpnet-base-v2)
    3. LangChain OracleVS wrapper for vector similarity operations

    The embedding model converts text queries into 768-dimensional vectors
    that can be compared against pre-computed embeddings in the database.
    """
    global embedding_model, vector_store, connection

    # Connect to Oracle database using oracledb driver
    connection = oracledb.connect(
        user=DB_USERNAME,
        password=DB_PASSWORD,
        dsn=DB_DSN
    )

    # Initialize HuggingFace embeddings model
    # This model converts text to 768-dimensional vectors
    # Same model used to create the original embeddings in the database
    embedding_model = HuggingFaceEmbeddings(
        model_name="sentence-transformers/all-mpnet-base-v2"
    )

    # Initialize vector store wrapper
    # OracleVS provides convenient interface for vector similarity operations
    vector_store = OracleVS(
        client=connection,
        table_name=TABLE_NAME,
        embedding_function=embedding_model,
        # Use cosine similarity for comparison
        distance_strategy=DistanceStrategy.COSINE,  
    )

Again, note that I am using the same embedding model that we used to create the vectors in this vector store. This is important because we need to create embedding vectors for the queries using the same model, so that similarity comparisons will be valid. It’s also important that we use the right distance strategy – for text data, cosine is generally agreed to be the best option. For performance reasons, if we had created a vector index, we’d want our searches to use the same distance metric so the index would be used when performing the search. Oracle will default to doing an “exact search” if there is no index or the distance metrics do not match.
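
For reference, creating an HNSW index with a matching metric might look something like this – a sketch only, since we don’t create one in this post; check the CREATE VECTOR INDEX documentation for your database version:

-- Approximate (HNSW) index using the same cosine metric as our searches
create vector index moby_dick_hnsw_idx
on moby_dick_500_30 (embedding)
organization inmemory neighbor graph
distance cosine
with target accuracy 95;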

Now, let’s add a function to perform a query in our Moby Dick vector store – we’ll include a top-k parameter so the caller can specify how many results they want:

def search_moby_dick(query: str, k: int = 4) -> list[dict]:
    """
    Perform vector similarity search on the moby_dick_500_30 table

    This function:
    1. Converts the query text to a vector using the embedding model
    2. Searches the database for the k most similar text chunks
    3. Returns results ranked by similarity (cosine distance)

    Args:
        query: The search query text (natural language)
        k: Number of results to return (default: 4)

    Returns:
        List of dictionaries containing rank, content, and metadata for each result
    """
    if vector_store is None:
        raise RuntimeError("Vector store not initialized")

    # Perform similarity search
    # The query is automatically embedded and compared against database vectors
    docs = vector_store.similarity_search(query, k=k)

    # Format results into structured dictionaries
    results = []
    for i, doc in enumerate(docs):
        results.append({
            "rank": i + 1,  # 1-indexed ranking by similarity
            "content": doc.page_content,  # The actual text chunk
            "metadata": doc.metadata  # Headers from the original HTML structure
        })

    return results

As you can see, this function returns a dictionary containing the rank, the content (chunk) and the metadata.
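
If you want to try the search on its own before adding the MCP plumbing, a quick temporary test could look like this – the query text is just an example, and you should remove this before running the file as an MCP server:

# Temporary smoke test of the search function
initialize_db()
for result in search_moby_dick("Who is Queequeg?", k=2):
    print(result["rank"], result["metadata"])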

Ok, now let’s turn this into an MCP server! First let’s create the server instance:

# Create MCP server instance
# The server name "moby-dick-search" identifies this server in MCP client connections
app = Server("moby-dick-search")

Now we want to provide a list-tools method so that MCP clients can find out what kinds of tools this server provides. We are just going to have our search tool, so let’s define that:

@app.list_tools()
async def list_tools() -> list[Tool]:
    """
    MCP protocol handler: returns list of available tools

    Called by MCP clients to discover what capabilities this server provides.
    This server exposes a single tool: search_moby_dick

    Returns:
        List of Tool objects with names, descriptions, and input schemas
    """
    return [
        Tool(
            name="search_moby_dick",
            description="Search the Moby Dick text using vector similarity. Returns relevant passages based on semantic similarity to the query.",
            inputSchema={
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query text"
                    },
                    "k": {
                        "type": "integer",
                        "description": "Number of results to return (default: 4)",
                        "default": 4
                    }
                },
                "required": ["query"]
            }
        )
    ]

And now, the part we’ve all been waiting for – let’s define the actual search tool (and a class to hold the arguments)!

class SearchArgs(BaseModel):
    """
    Arguments for the vector search tool

    Attributes:
        query: The natural language search query
        k: Number of most similar results to return (default: 4)
    """
    query: str
    k: int = 4

@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    """
    MCP protocol handler: executes tool calls

    Called when an MCP client wants to use one of the server's tools.
    Validates the tool name, parses arguments, performs the search,
    and returns formatted results.

    Args:
        name: Name of the tool to call
        arguments: Dictionary of tool arguments

    Returns:
        List of TextContent objects containing the formatted search results
    """
    # Validate tool name
    if name != "search_moby_dick":
        raise ValueError(f"Unknown tool: {name}")

    # Parse and validate arguments using Pydantic model
    args = SearchArgs(**arguments)

    # Perform the vector similarity search
    results = search_moby_dick(args.query, args.k)

    # Format response as human-readable text
    response_text = f"Found {len(results)} results for query: '{args.query}'\n\n"

    for result in results:
        response_text += f"--- Result {result['rank']} ---\n"
        response_text += f"Metadata: {result['metadata']}\n"
        response_text += f"Content: {result['content']}\n\n"

    # Return as MCP TextContent type
    return [TextContent(type="text", text=response_text)]

That was not too bad. Finally, let’s set up a main function to start up everything and handle the requests:

async def main():
    """
    Main entry point for the MCP server

    This function:
    1. Initializes the database connection and embedding model
    2. Sets up stdio transport for MCP communication
    3. Runs the server event loop to handle requests

    The server communicates via stdio (stdin/stdout), which allows
    it to be easily spawned by MCP clients as a subprocess.
    """
    # Initialize database connection and models
    initialize_db()

    # Import stdio server transport
    from mcp.server.stdio import stdio_server

    # Run the server using stdio transport
    # The server reads MCP protocol messages from stdin and writes responses to stdout
    async with stdio_server() as (read_stream, write_stream):
        await app.run(
            read_stream,
            write_stream,
            app.create_initialization_options()
        )

if __name__ == "__main__":
    asyncio.run(main())

Ok, that’s it! We can run this with the command:

python mcp_server.py

Now, to test it, we’re going to need a client! MCP Inspector is the logical place to start – you can get it from here, or (assuming you have node installed) by just running this command:

 npx @modelcontextprotocol/inspector python3.12 mcp_server.py

That’s going to start up a UI that looks like this:

Click on the connect button, and you should see an updated screen in a few seconds that looks like this:

Go ahead and click on List Tools and you will see our Search Moby Dick Tool show up – click on it to try it out.

You should see some results like this:

There you go, it works great! And that’s a super simple, basic MCP server and tool! Enjoy.

]]>
https://redstack.dev/2025/10/08/lets-make-a-simple-mcp-tool-for-oracle-ai-vector-search/feed/ 1 4164
Exploring securing vector similarity searches with Real Application Security https://redstack.dev/2025/10/08/exploring-securing-vector-similarity-searches-with-real-application-security/ https://redstack.dev/2025/10/08/exploring-securing-vector-similarity-searches-with-real-application-security/#respond <![CDATA[Mark Nelson]]> Wed, 08 Oct 2025 17:04:40 +0000 <![CDATA[Uncategorized]]> <![CDATA[ai]]> <![CDATA[artificial-intelligence]]> <![CDATA[rag]]> <![CDATA[ras]]> <![CDATA[security]]> <![CDATA[technology]]> https://redstack.dev/?p=4018 <![CDATA[In this post, I want to explore how you can use Real Application Security to provide access controls for vectors in a vector store in Oracle Database 23ai. I’m going to use the vector store we created in the last … Continue reading ]]> <![CDATA[

In this post, I want to explore how you can use Real Application Security to provide access controls for vectors in a vector store in Oracle Database 23ai.

I’m going to use the vector store we created in the last post as an example. If you want to follow along, you should follow that one first to create and populate your vector store, then come back here.

You should have a vector store table called MOBY_DICK_500_30 that you created in that previous post. You can connect to Oracle using SQLcl or SQL*Plus or whatever tool you prefer and check the structure of that table:

SQL> describe moby_dick_500_30

Name         Null?       Type
____________ ___________ ____________________________
ID           NOT NULL    RAW(16 BYTE)
TEXT                     CLOB
METADATA                 JSON
EMBEDDING                VECTOR(768,FLOAT32,DENSE)

Notice that the metadata column contains the document structure information from the loaders that we used. If we filter for Chapter 12, we can see there are 13 vectors associated with that chapter:

SQL> select metadata from moby_dick_500_30 where metadata like '%CHAPTER 12.%';

METADATA
__________________________________________________________________________________
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}
{"Header 1":"MOBY-DICK; or, THE WHALE.","Header 2":"CHAPTER 12. Biographical."}

13 rows selected.

We are going to use this metadata to filter access to the vectors.

Set up permissions

Let’s start by setting up the necessary permissions. You will need to run this as the SYS user:

alter session set container=freepdb1;
grant create session, xs_session_admin to vector;
exec sys.xs_admin_util.grant_system_privilege('provision', 'vector', sys.xs_admin_util.ptype_db);
grant create role to vector;
exec sys.xs_admin_util.grant_system_privilege('admin_sec_policy', 'vector', sys.xs_admin_util.ptype_db);
exec sys.xs_admin_util.grant_system_privilege('ADMIN_ANY_SEC_POLICY', 'vector', sys.xs_admin_util.ptype_db);

Great! Now let’s set up Real Application Security. We will run the rest of these commands as the VECTOR user.

Let’s start by creating a RAS role named role1:

exec sys.xs_principal.create_role(name => 'role1', enabled => true);

Now, we will create a user named user1 and grant them role1 and connect privileges:

exec  sys.xs_principal.create_user(name => 'user1', schema => 'vector');
exec  sys.xs_principal.set_password('user1', 'pwd1');
exec  sys.xs_principal.grant_roles('user1', 'XSCONNECT');
exec  sys.xs_principal.grant_roles('user1', 'role1');

Let’s also create a regular database role and give it access to the vector store table:

create role db_emp;
grant select, insert, update, delete on vector.moby_dick_500_30 to db_emp; 

Grant DB_EMP to the application role, so it has the required object privileges to access the table:

grant db_emp to role1;

Next, we want to create a security class, inheriting from the predefined DML security class:

begin
  sys.xs_security_class.create_security_class(
    name        => 'moby_privileges',
    parent_list => xs$name_list('sys.dml'),
    priv_list   => xs$privilege_list(xs$privilege('view_moby_dick')));
end;

Now we can create an ACL (access control list) which will grant the privileges for the policy that we will define in a moment:

declare 
  aces xs$ace_list := xs$ace_list(); 
begin
  aces.extend(1);
  aces(1) := xs$ace_type(
     privilege_list => xs$name_list('select'),
     principal_name => 'USER1');
  
  sys.xs_acl.create_acl(
    name  => 'moby_acl',
    ace_list  => aces,
    sec_class => 'moby_privileges');
end;

Ok, nearly there! Finally, let’s define the security policy and apply it to the table:

declare
  realms xs$realm_constraint_list := xs$realm_constraint_list();      
begin  
  realms.extend(1);
 
  -- Filter based on column value
  realms(1) := xs$realm_constraint_type(
    realm    => 'metadata LIKE ''%CHAPTER 12.%''',
    acl_list => xs$name_list('moby_acl'));

  sys.xs_data_security.create_policy(
    name                   => 'moby_policy',
    realm_constraint_list  => realms);
    
  sys.xs_data_security.apply_object_policy(
    policy => 'moby_policy',
    schema => 'vector',
    object =>'moby_dick_500_30');
end;

Ok, that’s it!

Now, you may have noticed we did not give ourselves any permissions, so if we try to query that vector store table now, we’ll see it appears empty!

SQL> select count(*) from moby_dick_500_30;

   COUNT(*)
___________
          0

But, if we reconnect with the application user (user1) that we defined, and do the same query, we will see those 13 records for Chapter 12:

SQL> connect user1/pwd1
Connected.
SQL> select count(*) from moby_dick_500_30;

   COUNT(*)
___________
         13

So there you have it! We can define policies to easily control access to vectors. In this example we used the metadata to create the filtering rules, of course you could create whatever kind of rules you need.

This allows you to have a vector store which can be easily filtered for different users (or roles), essentially creating a virtual private vector store. You might want to allow a ‘customer-support’ role to access a certain subset of vectors, for example, but allow your ‘supervisor’ role to access a larger set (or all) of the vectors.

What’s great about this is that the security is enforced in the database itself. When an AI Assistant, chatbot, MCP client, etc., performs a vector search, it will only ever be able to get back results from the vectors that the user is allowed to see. The database will never send users vectors which they are not allowed to see. So you don’t have to worry about trusting the LLM not to make a mistake and give out the wrong data, because it will literally never see the data in the first place.
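
To see that in an actual similarity search, rather than a plain count, you could run something like this while connected as user1. This is a sketch – :query_vec stands in for a query vector you would create with the same embedding model:

-- Only Chapter 12 chunks can ever be ranked and returned for user1
select metadata
from moby_dick_500_30
order by vector_distance(embedding, :query_vec, cosine)
fetch first 3 rows only;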

]]>
https://redstack.dev/2025/10/08/exploring-securing-vector-similarity-searches-with-real-application-security/feed/ 0 4018
Basic Retrieval Augmented Generation with Oracle Vector Store in LangChain https://redstack.dev/2025/05/23/basic-retrieval-augmented-generation-with-oracle-vector-store-in-langchain/ https://redstack.dev/2025/05/23/basic-retrieval-augmented-generation-with-oracle-vector-store-in-langchain/#comments <![CDATA[Mark Nelson]]> Fri, 23 May 2025 23:29:15 +0000 <![CDATA[Uncategorized]]> <![CDATA[ai]]> <![CDATA[artificial-intelligence]]> <![CDATA[langchain]]> <![CDATA[llm]]> <![CDATA[python]]> <![CDATA[rag]]> <![CDATA[technology]]> https://redstack.dev/?p=4082 <![CDATA[In this earlier post, we learned how to create a Vector Store in Oracle from LangChain, the hugely popular Python library for working with Generative AI and Large Language Models. We populated it with a little data and then performed … Continue reading ]]> <![CDATA[

In this earlier post, we learned how to create a Vector Store in Oracle from LangChain, the hugely popular Python library for working with Generative AI and Large Language Models. We populated it with a little data and then performed some simple vector similarity searches.

In this post, let’s expand on that to implement basic Retrieval Augmented Generation!

First, let’s talk about some concepts – if you already know this, feel free to jump ahead!

Generative AI – This is a type of Artificial Intelligence (AI) that uses a specialized form of machine learning model, called a Large Language Model (or “LLM”), to create (“generate”) new content based on a prompt from a user. It works by looking at the “tokens” that it received in the input (the “prompt”) and then figuring out what is the most probable next token in the sequence. What is a token? Well, it may be a word, or a part of a word, but we use the word “token” because it could also be part of an audio file or an image, since some of these models support other types of data, not just text. Note that it only generates one token at a time – you have to “run” the model again for every subsequent token.

Training – these models are “trained” by exposing them to very large amounts of data. Usually the data is publicly available information, collected from the Internet and/or other repositories. Training a model is very expensive, both in terms of time, and in terms of the cost of running the specialized GPU hardware needed to perform the training. You may see a model described as having “70 billion parameters” or something like that. Training is basically a process of tuning the probabilities of each of these parameters based on the new input.

When a model sees a prompt like “My Husky is a very good” it will use those probabilities to determine what comes next. In this example, “dog” would have a very high probability of being the next “token”.

Hyper-parameters – models also have extra parameters that control how they behave. These so-called “hyper-parameters” include things like “temperature” which controls how creative the model will be, “top-K” which controls how many options the model will consider when choosing the next token, and various kinds of “frequency penalties” that will cause the model to be more or less likely to reuse/repeat tokens. Of course these are just a few examples.
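
For example, with LangChain (which we will use later in this post) you can set hyper-parameters when initializing a model. A sketch, assuming an OpenAI chat model – exactly which parameters are supported varies by provider:

from langchain.chat_models import init_chat_model

# Lower temperature = more deterministic, less "creative" output
model = init_chat_model("gpt-4o-mini", model_provider="openai", temperature=0.2)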

Knowledge cut-off – an important property of LLMs is that they have not been exposed to any information that was created after their training ended. So a model trained in 2023 would not know who won an election held in 2024, for example.

Hallucination – LLMs tend to “make up an answer” if they do not “know” the answer. Now, obviously they don’t really know anything in the same sense that we know things; they are working with probabilities. But if we can anthropomorphize them for a moment, they tend to “want” to be helpful, and they are very likely to offer you a very confident but completely incorrect answer if they do not have the necessary information to answer a question.


Now, of course a lot of people want to use this exciting new technology to implement solutions to help their customers or users. ChatBots is a prime example of something that is frequently implemented using Generative AI these days. But, of course no new technology is a silver bullet, and they all come with their own challenges and issues. Let’s consider some common challenges when attempting to implement a project with Generative AI:

  • Which model to use? There are many models available, and they have all been trained differently. Some are specialized models that are trained to perform a particular task, for example summarizing a document. Other models are general purpose and can perform different tasks. Some models understand only one language (like English) and others understand many. Some models only understand text, others only images, others video, and others still are multi-modal and understand various different types of data. Models also have different licensing requirements. Some models are provided as a service, like a utility, where you typically pay some very small amount per request. Other models can be self-hosted, or run on your own hardware.
  • Privacy. Very often the data that you need for your project is non-public data, and very often you do not want to share that data with a third-party organization for privacy reasons, or even regulatory reasons, depending on your industry. People are also very wary about a third-party organization using their non-public data to train future models.
  • How to “tune” the model’s hyper-parameters. As we discussed earlier, the hyper-parameters control how the model behaves. The settings of these parameters can have a significant impact on the quality of the results that are produced.
  • Dealing with knowledge cut-off. Giving the model access to information that is newer than when it was trained is also a key challenge. Probably the most obvious way to do this is to continue the model’s training by exposing it to this newer information. This is known as “fine-tuning”. The key challenge is that this is an extremely expensive undertaking, requiring specialized GPU hardware and very highly skilled people to plan and run the training.

Enter “Retrieval Augmented Generation,” (or “RAG”) first introduced by Patrick Lewis et al, from Meta, in the 2020 paper “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks“. RAG is a technique that gives LLMs access to non-public information and/or information created after their training and is orders of magnitude less expensive than fine-tuning.

The essence of RAG is to provide the LLM with the information that it needs to answer a question by “stuffing” that information into the LLM’s “context window” or “prompt” along with the actual question. It’s a bit like an open-book test. Imagine you get a question like this:

How much does a checking account from (some bank) cost? 

And let’s assume that information is not readily available on the Internet. How would you come up with the answer? You likely could not.

But if the question was more like this:

How much does a checking account from (some bank) cost? 

To answer this question, consult these DOCUMENTS:

(here there would be the actual content of those documents that provide the information necessary to answer that question)

Much easier, right? That’s basically what RAG is. It provides the most relevant information to the LLM so that it can answer the question.

So now, the obvious questions are – where does it get this information from, and how does it know which parts are the most relevant?

This is where our Vector Store comes in!

The set of information, the non-public data that we want the LLM to use, we call that the “corpus”. Very often the corpus will be so large that there is no reasonable way for us to just give the LLM the whole thing. Now, as I am writing this in May 2025, there are models that have very large “context windows” and could be given a large amount of data. Llama 4 was just released, as I write this, and has a context window size of 10 million tokens! So you could in fact give it a large amount of information. But models that were released as recently as six or twelve months ago have much smaller context windows.

So the approach we use is to take the corpus, and split it up into small pieces, called “chunks”, and we create an “embedding vector” for each of these chunks. This vector is basically an n-dimensional numerical representation of the semantic meaning of the chunk. Chunks with similar meanings will have similar (i.e., close) vectors. Chunks with different meanings will have vectors that are further apart.
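
“Close” here is measured with a distance (or similarity) metric – cosine similarity is the usual choice for text. Here is a tiny sketch of the idea, with made-up three-dimensional vectors just for illustration:

import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

dog1 = np.array([0.9, 0.8, 0.1])    # hypothetical "large dog" chunk
dog2 = np.array([0.85, 0.75, 0.2])  # another large dog - nearly parallel
ball = np.array([0.1, 0.05, 0.9])   # something quite different

print(cosine_similarity(dog1, dog2))  # close to 1.0 (very similar)
print(cosine_similarity(dog1, ball))  # much lower (dissimilar)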

Now, visualizing an n-dimensional vector is challenging. But if n=2, it’s a lot easier. So let’s do that! Remember, in real models, n is much more likely to be in the thousands or tens of thousands, but the concepts are the same. Consider the diagram below:

In this diagram, we have only two dimensions, the vertical dimension is “largeness” – how large (or small) the thing is. The horizontal dimension is “dog-ness” – how much the thing is (or is not) a dog.

Notice that both the Saint Bernard and the German Shepherd (I hope I got those breeds right!) are large dogs. So the vectors for both of them are high on both axes, and they are very close together, because in this two-dimensional world, they are indeed very, very similar. The wolf is also large, but it is not actually a dog. Dogs are related to (descended from) wolves, so it is somewhat dog-like, but its vector is quite a distance away from the actual large dogs.

Now, look at the tennis ball! It is not large, and it is not a dog, so its vector is almost in the complete opposite direction to the large dogs.

Now, consider the question “Is a Husky a large dog?”

What we do in RAG, is we take that question, and turn that into a vector, using the exact same “embedding model” that we used to create those vectors we just looked at above, and then we see what other vectors are close to it.

Notice that the resulting vector, represented by the red dot, ended up very close to those two large dogs! So if we did a similarity search, that is, if we found the closest vectors to our question vector, what we would get back is the vectors for the Saint Bernard and the German Shepherd.

Here’s a diagram of the RAG process:

So we take the question from the user, we turn it into a vector, we find the closest vectors to it in our corpus, and then we get the actual content that those vectors were created from and give that information to the LLM to allow it to answer the question. Remember, in real life there are many more dimensions, and they are not going to be some concept that we can neatly label, like “largeness”. The actual dimensions are things that are learned by the model over many billions of iterations of weight adjustments as it was exposed to vast amounts of data. The closest (non-mathematical) analogy I can think of is Isaac Asimov’s “positronic brain” in his Robots, Empire and Foundation series, which he described as learning through countless small adjustments of uncountable numbers of weights.


Wow! That was a lot of theory! Let’s get back to some code, please!

In the previous post, we populated our vector store with just three very small quotes from Moby Dick. Now, let’s use the entire text!

Here’s the plain text version: https://www.gutenberg.org/cache/epub/2701/pg2701.txt

Here’s the same book in HTML, with some basic structure like H2 tags for the chapter headings: https://www.gutenberg.org/cache/epub/2701/pg2701-images.html

Let’s create a new notebook. If you followed along in the previous post, you can just create a new notebook in the same project and choose the same environment/kernel. If not, create a new project, then create a notebook, for example basic-rag.ipynb and create a kernel:

Click on the Select Kernel button (its on the top right). Select Python Environment then Create Python Environment. Select the option to create a Venv (Virtual Environment) and choose your Python interpreter. I recommend using at least Python 3.11. This will download all the necessary files and will take a minute or two.

If you created a new environment, install the necessary packages by creating and running a cell with this content. Note that you can run this even if you have a pre-existing environment – it won’t do any harm:

%pip install -qU "langchain[openai]"
%pip install oracledb
%pip install langchain-community langchain-huggingface

Now, create and run this cell to set your OpenAI API key:

import getpass
import os

if not os.environ.get("OPENAI_API_KEY"):
  os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter API key for OpenAI: ")

from langchain.chat_models import init_chat_model

model = init_chat_model("gpt-4o-mini", model_provider="openai")

model.invoke("Hello, world!")

Paste your key in when prompted (see the previous post if you need to know how to get one) and confirm you got the expected response from the model.

Note: You could, of course, use a different model if you wanted to. See the LangChain model documentation for options.

Now, let’s connect to the database by creating and running this cell (this assumes that you started the database container and created the user as described in the previous post!)

import oracledb

username = "vector"
password = "vector"
dsn = "localhost:1521/FREEPDB1"

try:
    connection = oracledb.connect(
        user=username, 
        password=password, 
        dsn=dsn)
    print("Connection successful!")
except Exception as e:
    print("Connection failed!")

Ok, now we are ready to read that document and create our vector embeddings. But how? In the previous post we manually created some excerpts, but now we want to read the whole document.

Enter Document Loaders! Take a look at that page – LangChain has hundreds of different document loaders that understand all kinds of document formats.

Let’s try the basic web loader, create and run this cell to install it:

%pip install -qU langchain_community beautifulsoup4

Now create and run this cell to initialize the document loader:

from langchain_community.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://www.gutenberg.org/cache/epub/2701/pg2701-images.html")

Now load the documents by running this cell:

docs = loader.load()

If you’d like, take a look at the result by running this cell:

docs[0]

Well, that is just one big document, and that is not so helpful – we want to split that document up into smaller chunks so we can create vectors for each smaller part. Let’s use a document splitter instead.

Install a splitter by running this cell:

%pip install -qU langchain-text-splitters

Note: Check out this page for more information about the available splitters. We are going to use the HTMLHeaderTextSplitter. Run this cell:

from langchain_text_splitters import HTMLHeaderTextSplitter

url = "https://www.gutenberg.org/cache/epub/2701/pg2701-images.html"

headers_to_split_on = [
    ("h1", "Header 1"),
    ("h2", "Header 2"),
    ("h3", "Header 3"),
    ("h4", "Header 4"),
]

html_splitter = HTMLHeaderTextSplitter(headers_to_split_on)

html_header_splits = html_splitter.split_text_from_url(url)

Let’s see what that did, run this cell:

html_header_splits

You’ll see a long list of Documents and if you look carefully, you can see that it has maintained the structure information.

Great! That’s a lot better.

Now, let’s suppose we wanted to constrain the size of the chunks. Some of those might be too big, so we might want to split them even further. We can do that with a RecursiveCharacterTextSplitter.

Let’s say we wanted chunks no bigger than 500 characters, with an overlap of 30. Now this might not be a good idea, but just for the sake of the example, let’s do it by running this cell:

from langchain_text_splitters import RecursiveCharacterTextSplitter

chunk_size = 500
chunk_overlap = 30
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=chunk_size, chunk_overlap=chunk_overlap
)

# Split
splits = text_splitter.split_documents(html_header_splits)

You can take a look at a few of the chunks by running this cell:

splits[80:85]
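If you are curious how many chunks that produced in total, you can check with a quick cell like this:

len(splits)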

Ok, great! Next, we need to create our embeddings and populate our vector store.

Install the dependencies, if you have not already, by running this cell:

%pip install langchain-community langchain-huggingface

And let’s create the vector store! Run this cell:

from langchain_community.vectorstores import oraclevs
from langchain_community.vectorstores.oraclevs import OracleVS
from langchain_community.vectorstores.utils import DistanceStrategy
from langchain_core.documents import Document
from langchain_huggingface import HuggingFaceEmbeddings

embedding_model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

vector_store = OracleVS.from_documents(
    splits,
    embedding_model,
    client=connection,
    table_name="moby_dick_500_30",
    distance_strategy=DistanceStrategy.COSINE,
)

We are using the same model as we did in the previous post, but now we are passing in the splits we just created – the 500-character chunks derived from the structure-aware splits of the HTML document. And we called our vector store table moby_dick_500_30 to make it a little easier to remember what we put in there.

After that cell has finished (it might take a few minutes), you can take a look to see what is in the vector store by running this command in your terminal window:

docker exec -i db23ai sqlplus vector/vector@localhost:1521/FREEPDB1 <<EOF
select table_name from user_tables;
describe moby_dick_500_30;
column id format a20;
column text format a30;
column metadata format a30;
column embedding format a30;
set linesize 150;
select * from moby_dick_500_30
fetch first 3 rows only;
EOF

You should get something similar to this:

Let’s try our searches again, run this cell:

query = 'Where is Rokovoko?'
print(vector_store.similarity_search(query, 1))

query2 = 'What does Ahab like to do after breakfast?'
print(vector_store.similarity_search(query2, 1))

You can change that 1 to a larger number now, since you have many more vectors, to see what you get!
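For example, to see the three closest chunks for the first question, you could run a cell like this:

query = 'Where is Rokovoko?'
for doc in vector_store.similarity_search(query, 3):
    print(doc.page_content[:100])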

Ok, now we have all the pieces we need and we are ready to implement the RAG!

The most basic way to implement RAG is to use a “retriever” – we can grab one from our vector store like this:

retriever = vector_store.as_retriever()
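By default the retriever returns the top four matches. If you want a different number, you can pass standard search parameters when you create it – for example (the value 2 is just an arbitrary choice for illustration):

retriever = vector_store.as_retriever(search_kwargs={"k": 2})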

Try it out by asking a question:

docs = retriever.invoke("Where is Rokovoko?")

docs

You’ll get something like this:

Nearly there!

Now, we want to give the LLM a good prompt to tell it what to do, and include the retrieved documents. Let’s use a standard prompt for now:

from langchain import hub

prompt = hub.pull("rlm/rag-prompt")

example_messages = prompt.invoke(
    {"context": "(context goes here)", "question": "(question goes here)"}
).to_messages()

assert len(example_messages) == 1
print(example_messages[0].content)

The prompt looks like this:

You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.
Question: (question goes here) 
Context: (context goes here) 
Answer:

Ok, now to put it all together. In real life we’d probably want to use LangGraph at this point, and we’d want to think about including things like memory, ranking the results from the vector search, citations/references (“grounding” the answer), and streaming the output. But that’s all for another post! For now, let’s just do the most basic implementation:

question = "..."

retrieved_docs = vector_store.similarity_search(question)
docs_content = "\n\n".join(doc.page_content for doc in retrieved_docs)
prompt_val = prompt.invoke({"question": question, "context": docs_content})
answer = model.invoke(prompt_val)

answer

You should get an answer similar to this:

AIMessage(content='Rokovoko is an island located far away to the West and South, as mentioned in relation to Queequeg, a native of the island. It is not found on any maps, suggesting it may be fictional.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 46, 'prompt_tokens': 402, 'total_tokens': 448, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_54eb4bd693', 'id': 'chatcmpl-BaW3g2omlCY0l6LwDkC9Ub8Ls3V88', 'service_tier': 'default', 'finish_reason': 'stop', 'logprobs': None}, id='run--86f14be2-9c8f-43c9-ae89-259db1c640bd-0', usage_metadata={'input_tokens': 402, 'output_tokens': 46, 'total_tokens': 448, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})

That’s a pretty good answer!
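If you plan to ask several questions, you might wrap those retrieve-and-generate steps in a small helper function. Here’s a convenience sketch using the objects we already created (the function name ask is my own):

def ask(question, k=4):
    # retrieve the k most similar chunks and join them into one context string
    retrieved_docs = vector_store.similarity_search(question, k)
    docs_content = "\n\n".join(doc.page_content for doc in retrieved_docs)
    # fill in the RAG prompt and send it to the chat model
    prompt_val = prompt.invoke({"question": question, "context": docs_content})
    return model.invoke(prompt_val).content

print(ask("Where is Rokovoko?"))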

Well, there you go, we covered a lot of ground in this post, but that’s just a very basic RAG. Stay tuned to learn about implementing a more realistic RAG in the next post!

]]>
https://redstack.dev/2025/05/23/basic-retrieval-augmented-generation-with-oracle-vector-store-in-langchain/feed/ 2 4082
Getting started with Oracle Vector Store support in LangChain https://redstack.dev/2025/05/16/getting-started-with-oracle-vector-store-support-in-langchain/ https://redstack.dev/2025/05/16/getting-started-with-oracle-vector-store-support-in-langchain/#comments <![CDATA[Mark Nelson]]> Fri, 16 May 2025 22:00:53 +0000 <![CDATA[Uncategorized]]> <![CDATA[ai]]> <![CDATA[artificial-intelligence]]> <![CDATA[langchain]]> <![CDATA[llm]]> <![CDATA[oracle]]> <![CDATA[rag]]> <![CDATA[semantic-search]]> <![CDATA[vector-store]]> https://redstack.dev/?p=4041 <![CDATA[In this post, I would like to show you the basics of how to use the Oracle Vector Store support in LangChain. I am using Visual Studio Code with the Python and Jupyter extensions from Microsoft installed. I will show … Continue reading ]]> <![CDATA[

In this post, I would like to show you the basics of how to use the Oracle Vector Store support in LangChain. I am using Visual Studio Code with the Python and Jupyter extensions from Microsoft installed. I will show more detailed usage in future posts!

Prefer to watch a video? Check it out here:

To get started, create a new project in Visual Studio Code, and then create a new Jupyter Notebook using File > New File… then choose Jupyter Notebook as the type of file, and save your new file as getting_started.ipynb.

First, we need to set up the Python runtime environment. Click on the Select Kernel button (it’s on the top right). Select Python Environment then Create Python Environment. Select the option to create a Venv (Virtual Environment) and choose your Python interpreter. I recommend using at least Python 3.11. This will download all the necessary files and will take a minute or two.

In this example, we will use OpenAI for our chat model. You’ll need to get an API Key from OpenAI, which you can do by logging into https://platform.openai.com/settings/organization/api-keys and creating a key. Of course you could use a different model, including a self-hosted model so that you don’t have to send your data outside your organization. I’ll cover that in future posts, stay tuned!

In the first cell, check that the type is Python and enter this code:

%pip install -qU "langchain[openai]"

Press Shift+Enter or click on the Run icon to run this code block. This will also take a minute or so to install the LangChain library for OpenAI.

Now create a second cell and paste in this code:

import getpass
import os

if not os.environ.get("OPENAI_API_KEY"):
  os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter API key for OpenAI: ")

from langchain.chat_models import init_chat_model

model = init_chat_model("gpt-4o-mini", model_provider="openai")

Run this block, and when it prompts you for your key, paste it in; it will start with something like sk-proj and have a long string of mostly letters and numbers after that. This will save your key in the environment so that you don’t have to keep entering it each time.

Now, we are ready to talk to the LLM! Let’s try a simple prompt. Create a new cell and enter this code:

model.invoke("Hello, world!")

Run this cell and observe the output. It should look something like this:

Great, now we are ready to connect to a vector store. If you don’t already have one, start up an instance of Oracle Database 23ai in a container on your machine. Run this command in a terminal window (not the notebook):

docker run -d --name db23ai \
  -p 1521:1521 \
  -e ORACLE_PWD=Welcome12345 \
  -v db23ai-volume:/opt/oracle/oradata \
  container-registry.oracle.com/database/free:latest

This will start up an Oracle Database 23ai Free instance in a container. It will have a PDB called FREEPDB1 and the password for PDBADMIN (and SYS and SYSTEM) will be Welcome12345.

Now, run the following command to create an Oracle user with appropriate permissions to create a vector store:

docker exec -i db23ai sqlplus sys/Welcome12345@localhost:1521/FREEPDB1 as sysdba <<EOF
alter session set container=FREEPDB1;
create user vector identified by vector;
grant connect, resource, unlimited tablespace, create credential, create procedure, create any index to vector;
commit;
EOF

Let’s connect to the database! First we’ll install the oracledb library. Create a new cell and enter this code:

%pip install oracledb

Run this code block to install the library.

Now create a new code block with this code:

import oracledb

username = "vector"
password = "vector"
dsn = "localhost:1521/FREEPDB1"

try:
    connection = oracledb.connect(
        user=username, 
        password=password, 
        dsn=dsn)
    print("Connection successful!")
except Exception as e:
    print("Connection failed!")

Run this code block. You should see the output “Connection successful!”

Now, let’s install the dependencies we will need to load some documents into the vector store. Create a new cell with this code and run it:

%pip install langchain-community langchain-huggingface

Now, import the things we will need by creating a new cell with this code and running it:

from langchain_community.vectorstores import oraclevs
from langchain_community.vectorstores.oraclevs import OracleVS
from langchain_community.vectorstores.utils import DistanceStrategy
from langchain_core.documents import Document
from langchain_huggingface import HuggingFaceEmbeddings

We are going to need some documents to load into the vector store, so let’s define some to use for an example. In real life, you’d probably want to use your own non-public documents to load a vector store if you were building a chatbot or using retrieval augmented generation. Create and run a new cell with this code:

documents_json_list = [
    {
        "id": "moby_dick_2701_P1",
        "text": "Queequeg was a native of Rokovoko, an island far away to the West and South. It is not down in any map; true places never are.",
        "link": "https://www.gutenberg.org/cache/epub/2701/pg2701-images.html#link2HCH0012",
    },
    {
        "id": "moby_dick_2701_P2",
        "text": "It was not a great while after the affair of the pipe, that one morning shortly after breakfast, Ahab, as was his wont, ascended the cabin-gangway to the deck. There most sea-captains usually walk at that hour, as country gentlemen, after the same meal, take a few turns in the garden.",
        "link": "https://www.gutenberg.org/cache/epub/2701/pg2701-images.html#link2HCH0036",
    },
    {
        "id": "moby_dick_2701_P3",
        "text": "Now, from the South and West the Pequod was drawing nigh to Formosa and the Bashee Isles, between which lies one of the tropical outlets from the China waters into the Pacific. And so Starbuck found Ahab with a general chart of the oriental archipelagoes spread before him; and another separate one representing the long eastern coasts of the Japanese islands—Niphon, Matsmai, and Sikoke. ",
        "link": "https://www.gutenberg.org/cache/epub/2701/pg2701-images.html#link2HCH0109",
    },
]

Now, let’s load them into a LangChain documents list with some metadata. Create and run a cell with this code:

# Create Langchain Documents

documents_langchain = []

for doc in documents_json_list:
    metadata = {"id": doc["id"], "link": doc["link"]}
    doc_langchain = Document(page_content=doc["text"], metadata=metadata)
    documents_langchain.append(doc_langchain)

Ok, great. Now we can create a vector store and load those documents. Create and run a cell with this code:

model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

vector_store = OracleVS.from_documents(
    documents_langchain,
    model,
    client=connection,
    table_name="Documents_COSINE",
    distance_strategy=DistanceStrategy.COSINE,
)
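If you later want to add more documents to the same table, you don’t need to rebuild the store – the standard LangChain vector store methods work too. Here’s a small sketch for later reference (you don’t need to run it now, and the extra passage and id are just for illustration):

vector_store.add_texts(
    ["Call me Ishmael."],
    metadatas=[{"id": "moby_dick_2701_P4", "link": "https://www.gutenberg.org/cache/epub/2701/pg2701-images.html"}],
)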

Let’s have a look in the database and see what was created. Run this code in your terminal:

docker exec -i db23ai sqlplus vector/vector@localhost:1521/FREEPDB1 <<EOF
select table_name from user_tables;
describe documents_cosine;
column id format a20;
column text format a30;
column metadata format a30;
column embedding format a30;
set linesize 150;
select * from documents_cosine;
EOF

You should see output similar to this:

SQL>
TABLE_NAME
--------------------------------------------------------------------------------
DOCUMENTS_COSINE

SQL>  Name                                         Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                        NOT NULL RAW(16)
 TEXT                                               CLOB
 METADATA                                           JSON
 EMBEDDING                                          VECTOR(768, FLOAT32)

SQL> SQL> SQL> SQL> SQL> SQL>
ID                   TEXT                           METADATA                       EMBEDDING
-------------------- ------------------------------ ------------------------------ ------------------------------
957B602A0B55C487     Now, from the South and West t {"id":"moby_dick_2701_P3","lin [9.29364376E-003,-5.70030287E-
                     he Pequod was drawing nigh to  k":"https://www.gutenberg.org/ 002,-4.62282933E-002,-1.599499
                     Formosa and the Bash           cache/epub/2701/pg27           58E-002,

A8A71597D56432FD     Queequeg was a native of Rokov {"id":"moby_dick_2701_P1","lin [4.28722538E-002,-8.80071707E-
                     oko, an island far away to the k":"https://www.gutenberg.org/ 003,3.56001826E-003,6.765306E-
                      West and South. It            cache/epub/2701/pg27           003,

E7675836CF07A695     It was not a great while after {"id":"moby_dick_2701_P2","lin [1.06763924E-002,3.91203648E-0
                      the affair of the pipe, that  k":"https://www.gutenberg.org/ 04,-1.01576066E-002,-3.5316135
                     one morning shortly            cache/epub/2701/pg27           7E-002,

Now, let’s do a vector similarity search. Create and run a cell with this code:

query = 'Where is Rokovoko?'
print(vector_store.similarity_search(query, 1))

query2 = 'What does Ahab like to do after breakfast?'
print(vector_store.similarity_search(query2, 1))

This will find the one (1) nearest match in each case. You should get an answer like this:

[Document(metadata={'id': 'moby_dick_2701_P1', 'link': 'https://www.gutenberg.org/cache/epub/2701/pg2701-images.html#link2HCH0012'}, page_content='Queequeg was a native of Rokovoko, an island far away to the West and South. It is not down in any map; true places never are.')]

[Document(metadata={'id': 'moby_dick_2701_P2', 'link': 'https://www.gutenberg.org/cache/epub/2701/pg2701-images.html#link2HCH0036'}, page_content='It was not a great while after the affair of the pipe, that one morning shortly after breakfast, Ahab, as was his wont, ascended the cabin-gangway to the deck. There most sea-captains usually walk at that hour, as country gentlemen, after the same meal, take a few turns in the garden.')]
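If you also want to see how close each match was, you can ask for the distance as well – a quick sketch, assuming your version of the OracleVS integration exposes the standard scored variant:

for doc, score in vector_store.similarity_search_with_score('Where is Rokovoko?', 1):
    print(score, doc.page_content[:60])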

Well, there you go, that’s the most basic example of creating a vector store, loading some documents into it and doing a simple similarity search. Stay tuned to learn about more advanced features!

]]>
https://redstack.dev/2025/05/16/getting-started-with-oracle-vector-store-support-in-langchain/feed/ 1 4041
Running Oracle Autonomous Database in a container https://redstack.dev/2025/04/25/running-oracle-autonomous-database-in-a-container/ https://redstack.dev/2025/04/25/running-oracle-autonomous-database-in-a-container/#respond <![CDATA[Mark Nelson]]> Fri, 25 Apr 2025 14:51:04 +0000 <![CDATA[Uncategorized]]> <![CDATA[ADB]]> <![CDATA[autonomous]]> <![CDATA[container]]> <![CDATA[database]]> <![CDATA[oracle]]> <![CDATA[oracle-database]]> <![CDATA[ORDS]]> <![CDATA[sql]]> https://redstack.dev/?p=4020 <![CDATA[Did you know that you can easily run Oracle Autonomous Database in a container on your local machine? This is a great for development. It’s totally free, and you don’t even need to authenticate to pull the image. It also … Continue reading ]]> <![CDATA[

Did you know that you can easily run Oracle Autonomous Database in a container on your local machine? This is great for development. It’s totally free, and you don’t even need to authenticate to pull the image. It also includes Oracle REST Data Services, APEX, Database Actions and the MongoDB API, so you get a nice built-in browser-based UI to work with your database. The free version does have a 20GB limit on database size, but for development purposes, that’s fine.

Prefer to watch a video? Watch this content on YouTube instead

To start up a database, you can use this command, just replace the “xxxxxx”s with proper passwords. Note that the volume is needed so data will be persisted across container restarts; if you leave that out, you’ll get a new empty database every time you restart the container:

docker run -d \
  -p 1521:1522 \
  -p 1522:1522 \
  -p 8443:8443 \
  -p 27017:27017 \
  -e WALLET_PASSWORD=xxxxxx \
  -e ADMIN_PASSWORD=xxxxxx \
  --cap-add SYS_ADMIN \
  --device /dev/fuse \
  --volume adb-free-volume:/data \
  --name adb-free \
  container-registry.oracle.com/database/adb-free:latest-23ai

The ports listed are for the following access methods:

  • 1521 TLS
  • 1522 mTLS
  • 8443 HTTPS port for ORDS, APEX and Database Actions
  • 27017 MongoDB API

Once the database has started up, you can access the web UI using these URLs:

Here’s what the Database Actions login page looks like, you can log in with the user “admin” and the password you specified:

When you sign in, you will see the launchpad, from where you can access various tools:

For example, you could open the SQL tool and try executing a statement:

You may also want to connect to your database using other tools like Oracle SQL Developer (which is a Visual Studio Code extension) or SQLcl (which is a command line tool), or from a program. To do this, you will probably want to grab the wallet – read on!

Connecting to the database

If you want to use mTLS, you can get the wallet by copying it from the image using this command, just provide the desired destination path in the last argument:

docker cp adb-free:/u01/app/oracle/wallets/tls_wallet /path/to/wallet

Note that the address will be ‘localhost’ in the tnsnames.ora, so you will need to update that if necessary.

To use the wallet, set your TNS_ADMIN environment variable:

export TNS_ADMIN=/path/to/wallet

The following TNS aliases are provided, for mTLS:

  • myatp_medium
  • myatp_high
  • myatp_low
  • myatp_tp
  • myatp_tpurgent

And for TLS:

  • myatp_medium_tls
  • myatp_high_tls
  • myatp_low_tls
  • myatp_tp_tls
  • myatp_tpurgent_tls

Here’s an example of connecting with SQLcl:

$ TNS_ADMIN=/path/to/wallet sql admin/xxxxxx@myatp_high

SQLcl: Release 24.1 Production on Fri Apr 25 10:41:46 2025

Copyright (c) 1982, 2025, Oracle.  All rights reserved.

Last Successful login time: Fri Apr 25 2025 10:41:48 -04:00

Connected to:
Oracle Database 23ai Enterprise Edition Release 23.0.0.0.0 - Production
Version 23.6.0.24.11

SQL> select sysdate;

SYSDATE
____________
25-APR-25
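You can also connect from a program. For example, here’s a minimal sketch using the python-oracledb driver in thin mode, assuming the wallet was copied to /path/to/wallet (thin mode needs the PEM wallet file to be present there) and xxxxxx is the password you chose:

import oracledb

connection = oracledb.connect(
    user="admin",
    password="xxxxxx",
    dsn="myatp_high",
    config_dir="/path/to/wallet",       # where tnsnames.ora lives
    wallet_location="/path/to/wallet",  # mTLS wallet files
    wallet_password="xxxxxx",           # the WALLET_PASSWORD you set earlier
)

with connection.cursor() as cursor:
    for row in cursor.execute("select sysdate from dual"):
        print(row)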

Here’s an example of connecting from SQL Developer. When you create the connection, just choose the location of the wallet (tnsnames.ora file) and it will let you select the TNS name to connect to:

Enjoy!

]]>
https://redstack.dev/2025/04/25/running-oracle-autonomous-database-in-a-container/feed/ 0 4020
How to read the content of a JMS message using PL/SQL https://redstack.dev/2023/05/26/how-to-read-the-content-of-a-jms-message-using-pl-sql/ https://redstack.dev/2023/05/26/how-to-read-the-content-of-a-jms-message-using-pl-sql/#respond <![CDATA[Mark Nelson]]> Fri, 26 May 2023 17:50:57 +0000 <![CDATA[Uncategorized]]> <![CDATA[JMS]]> <![CDATA[oracle]]> <![CDATA[springboot]]> <![CDATA[TEQ]]> <![CDATA[txeventq]]> https://redstack.dev/?p=4000 <![CDATA[This is just a short post – but all the details are in this post from Rob Van Wijk. Today I wanted to read the contents of a JMS Text Message sitting in a queue. I wrote a Spring Boot … Continue reading ]]> <![CDATA[

This is just a short post – but all the details are in this post from Rob Van Wijk.

Today I wanted to read the contents of a JMS Text Message sitting in a queue. I wrote a Spring Boot microservice that sends a message, and I have not written the one that receives and processes the message yet, so I wanted to look at the message on the queue to check it was correct.

So I went and did a good old “select user_data from deposits_qt” and stared at the answer: “Object”. Hmmm, not what I wanted.

After a quick bit of Googling, I found Rob’s post which told me exactly what I needed to know. Yay! Thanks Rob!

Then I changed my query to this:

select qt.user_data.text_vc from account.deposits_qt qt;

And I got exactly what I needed:

{"accountId":2,"amount":200}

Fantastic! Thanks a lot, Rob!

]]>
https://redstack.dev/2023/05/26/how-to-read-the-content-of-a-jms-message-using-pl-sql/feed/ 0 4000
Vote for my session at VMWare Explore (Spring track)! https://redstack.dev/2023/05/16/vote-for-my-session-at-vmware-explore-spring-track/ https://redstack.dev/2023/05/16/vote-for-my-session-at-vmware-explore-spring-track/#respond <![CDATA[Mark Nelson]]> Tue, 16 May 2023 13:41:05 +0000 <![CDATA[Uncategorized]]> <![CDATA[Spring]]> <![CDATA[springboot]]> <![CDATA[vmwareexplore]]> https://redstack.dev/?p=3995 <![CDATA[Vote for my People’s Choice Session “Experiences and lessons learnt building a multi-cloud #SpringBoot backend (ID 2002)” to be featured at #VMwareExplore 2023 Las Vegas! Place your vote by May 26: https://lnkd.in/eiRi-YF7 Register for VMWare Explore here. Learn more about Oracle Backend for Spring Boot … Continue reading ]]> <![CDATA[

Vote for my People’s Choice Session “Experiences and lessons learnt building a multi-cloud #SpringBoot backend (ID 2002)” to be featured at #VMwareExplore 2023 Las Vegas! Place your vote by May 26: https://lnkd.in/eiRi-YF7

Register for VMWare Explore here. Learn more about Oracle Backend for Spring Boot here.

]]>
https://redstack.dev/2023/05/16/vote-for-my-session-at-vmware-explore-spring-track/feed/ 0 3995
Start up an Oracle Database in Kubernetes with Oracle REST Data Services and Database Actions in no time at all! https://redstack.dev/2023/05/01/start-up-an-oracle-database-in-kubernetes-with-oracle-rest-data-services-and-database-actions-in-no-time-at-all/ https://redstack.dev/2023/05/01/start-up-an-oracle-database-in-kubernetes-with-oracle-rest-data-services-and-database-actions-in-no-time-at-all/#respond <![CDATA[Mark Nelson]]> Mon, 01 May 2023 19:33:11 +0000 <![CDATA[Uncategorized]]> <![CDATA[database]]> <![CDATA[database actions]]> <![CDATA[kubernetes]]> <![CDATA[operator]]> <![CDATA[oracle]]> <![CDATA[ORDS]]> <![CDATA[REST]]> <![CDATA[sql-developer]]> https://redstack.dev/?p=3962 <![CDATA[Hi everyone! Today I want to show you how easy it is to get an instance of Oracle up and running in Kubernetes, with Oracle REST Data Services and Database Actions using the Oracle Database Operator for Kubernetes Let’s assume … Continue reading ]]> <![CDATA[

Hi everyone! Today I want to show you how easy it is to get an instance of Oracle up and running in Kubernetes, with Oracle REST Data Services and Database Actions using the Oracle Database Operator for Kubernetes

Let’s assume you have a Kubernetes cluster running and you have configured kubectl access to the cluster.

The first step is to install Cert Manager, which is a pre-requisite for the Oracle Database Operator:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/latest/download/cert-manager.yaml

It will take probably less than a minute to start up. You can check on it with this command:

kubectl -n cert-manager get pods
NAME                                      READY   STATUS    RESTARTS   AGE
cert-manager-8f49b54c8-xxd5v              1/1     Running   0          7d5h
cert-manager-cainjector-678548868-x5ljp   1/1     Running   0          7d5h
cert-manager-webhook-898d9d956-57m76      1/1     Running   0          7d5h

Next, install the Oracle Database Operator itself:

kubectl apply -f https://raw.githubusercontent.com/oracle/oracle-database-operator/main/oracle-database-operator.yaml

That will start up pretty quickly too, and you can check with this command:

kubectl -n oracle-database-operator-system get pods 

Let’s create a Single Instance Database. The Oracle Database Operator will let you create other types of databases too, including sharded and multitenant databases, and it can manage cloud database instances like Autonomous Database and Database Cloud Service. But today, I’m going to stick with a simple single instance.

Here’s the Kubernetes YAML file to describe the database we want, I called this sidb.yaml:

apiVersion: database.oracle.com/v1alpha1
kind: SingleInstanceDatabase
metadata:
  name: sidb-sample
  namespace: default
spec:
  sid: ORCL1
  edition: enterprise
  adminPassword:
    secretName: db-admin-secret
    secretKey: oracle_pwd
    keepSecret: true
  charset: AL32UTF8
  pdbName: orclpdb1
  flashBack: false
  archiveLog: false
  forceLog: false
  enableTCPS: false
  tcpsCertRenewInterval: 8760h
  image:
    pullFrom: container-registry.oracle.com/database/enterprise:latest
    pullSecrets: oracle-container-registry-secret
  persistence:
    size: 100Gi
    storageClass: "oci-bv"
    accessMode: "ReadWriteOnce"
  loadBalancer: false
  serviceAccountName: default
  replicas: 1

If you have not done so before, head over to Oracle Container Registry and go to the Database group, and accept the license agreement for the Enterprise option. You’ll also want to create a Kubernetes secret with your credentials so it can pull the image:

kubectl create secret docker-registry oracle-container-registry-secret \
  --docker-server=container-registry.oracle.com \
  --docker-username='[email protected]' \
  --docker-password='whatever' \
  --docker-email='[email protected]'

You will want to change the storageClass to match your cluster. I am using Oracle Container Engine for Kubernetes in this example, so I used the “oci-bv” storage class. If you are using a different flavor of Kubernetes, you should check what storage classes are available and use one of them.

This YAML describes a database with the SID ORCL1 and a PDB called orclpdb1. It will get the password for sys, pdbadmin, etc., from a Kubernetes secret – so let’s create that:

kubectl create secret generic db-admin-secret --from-literal=oracle_pwd=Welcome12345

Now we can create the database by applying that YAML file to our cluster:

kubectl apply -f sidb.yaml

It will take a few minutes to start up fully – it has to pull the image (which took 3m30s on my cluster, for the “enterprise” image, which is the biggest one), create the database instance the first time (mine took 8m), and apply any patches that are required (just over 1m for me). Subsequent startups will be much faster of course (I stopped it by scaling to zero replicas, then started it again by scaling back to one replica, and it reached ready/healthy status in about 90s). For reference, my cluster had two nodes, each with one OCPU and 16 GB of RAM. You can check on the progress with this command:

kubectl get singleinstancedatabases -o wide -w

As the database starts up, you will see the connection string and other fields populate in the output.

Now, let’s add Oracle REST Data Services. Here’s a Kubernetes YAML file that describes what we want, I called this ords.yaml:

apiVersion: database.oracle.com/v1alpha1
kind: OracleRestDataService
metadata:
  name: ords-sample
  namespace: default
spec:
  databaseRef: "sidb-sample"
  adminPassword:
    secretName: db-admin-secret
  ordsPassword:
    secretName: ords-secret
  image:
    pullFrom: container-registry.oracle.com/database/ords:21.4.2-gh
  restEnableSchemas:
  - schemaName: mark
    enable: true
    urlMapping: mark

You’ll need to create a secret to hold the password, for example:

kubectl create secret generic ords-secret --from-literal=oracle_pwd=Welcome12345

You can apply that to your cluster with this command:

kubectl apply -f ords.yaml

And we can check on progress with this command:

kubectl get oraclerestdataservice -w

As it becomes ready, you will see the URLs for the Database API REST endpoint and for Database Actions. Mine took about 2m to reach ready/healthy status.

If your nodes are on a private network, the quickest way to access the REST APIs and Database Actions is to use a port forward. You can get the name of the ORDS pod and start a port forwarding session with commands like this:

kubectl get pods
kubectl port-forward pod/ords-sample-g4wc7 8443

Now you can hit the Database API REST endpoint with curl:

curl -k  https://localhost:8443/ords/orclpdb1/_/db-api/stable/
{"links":[{"rel":"self","href":"https://localhost:8443/ords/orclpdb1/_/db-api/stable/"},{"rel":"describedby","href":"https://localhost:8443/ords/orclpdb1/_/db-api/stable/metadata-catalog/"}]}

And you can access Database Actions at this address: https://localhost:8443/ords/sql-developer

On the login page, enter ORCLPDB1 for the PDB Name and mark as the user. Then on the password page enter Welcome12345, and you are good to go!
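Since we REST-enabled the mark schema in ords.yaml, you can also try the REST-Enabled SQL endpoint with curl – a sketch, assuming the URL mapping shown above (the exact path can vary with your ORDS configuration):

curl -k -X POST -u mark:Welcome12345 \
  -H "Content-Type: application/sql" \
  --data 'select sysdate from dual;' \
  https://localhost:8443/ords/orclpdb1/mark/_/sql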

While we are at it, let’s also get SQLcl access to the database.

Again, we can use port forwarding to access the database from outside the cluster:

 kubectl port-forward svc/sidb-sample 1521 &

And then connect from SQLcl (if you have not checked out SQLcl yet, you should; it’s got cool features like command line completion and history):

sql mark/Welcome12345@//localhost:1521/orclpdb1


SQLcl: Release 22.2 Production on Mon May 01 14:32:57 2023

Copyright (c) 1982, 2023, Oracle.  All rights reserved.

Last Successful login time: Mon May 01 2023 14:32:56 -04:00

Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

SQL> select * from dual;

DUMMY
________
X

SQL>

There you go! That was super quick and easy! Enjoy!

]]>
https://redstack.dev/2023/05/01/start-up-an-oracle-database-in-kubernetes-with-oracle-rest-data-services-and-database-actions-in-no-time-at-all/feed/ 0 3962
New 23c version of Kafka-compatible Java APIs for Transactional Event Queues published https://redstack.dev/2023/05/01/new-23c-version-of-kafka-compatible-java-apis-for-transactional-event-queues-published/ https://redstack.dev/2023/05/01/new-23c-version-of-kafka-compatible-java-apis-for-transactional-event-queues-published/#respond <![CDATA[Mark Nelson]]> Mon, 01 May 2023 15:59:34 +0000 <![CDATA[Uncategorized]]> <![CDATA[database]]> <![CDATA[Java]]> <![CDATA[kafka]]> <![CDATA[oracle]]> <![CDATA[txeventq]]> https://redstack.dev/?p=3931 <![CDATA[We just published the new 23c version of the Kafka-compatible Java APIs for Transactional Event Queues in Maven Central, and I wanted to show you how to use them! If you are not familiar with these APIs – they basically … Continue reading ]]> <![CDATA[

We just published the new 23c version of the Kafka-compatible Java APIs for Transactional Event Queues in Maven Central, and I wanted to show you how to use them! If you are not familiar with these APIs – they basically allow you to use the standard Kafka Java API with Transactional Event Queues acting as the Kafka broker. The only things that you would need to change are the broker address, and you need to use the Oracle versions of KafkaProducer and KafkaConsumer – other than that, your existing Kafka Java code should just work!

We also published updated source and sink Kafka connectors for Transactional Event Queues – but I’ll cover those in a separate post.

Let’s build a Kafka producer and consumer using the updated Kafka-compatible APIs.

Prepare the database

The first thing we want to do is start up the Oracle 23c Free Database. This is very easy to do in a container using a command like this:

docker run --name free23c -d -p 1521:1521 -e ORACLE_PWD=Welcome12345 container-registry.oracle.com/database/free:latest

This will pull the image and start up the database with a listener on port 1521. It will also create a pluggable database (a database container) called “FREEPDB1” and will set the admin passwords to the password you specified on this command.

You can tail the logs to see when the database is ready to use:

docker logs -f free23c

(look for this message...)
#########################
DATABASE IS READY TO USE!
#########################

Also, grab the IP address of the container, we’ll need that to connect to the database:

docker inspect free23c | grep IPA
            "SecondaryIPAddresses": null,
            "IPAddress": "172.17.0.2",
                    "IPAMConfig": null,
                    "IPAddress": "172.17.0.2",

To set up the necessary permissions, you’ll need to connect to the database with a client. If you don’t have one already, I’d recommend trying the new SQLcl CLI which you can download here. Start it up and connect to the database like this (note that your IP address and password may be different):

sql sys/Welcome12345@//172.17.0.2:1521/freepdb1 as sysdba


SQLcl: Release 22.2 Production on Tue Apr 11 12:36:24 2023

Copyright (c) 1982, 2023, Oracle.  All rights reserved.

Connected to:
Oracle Database 23c Free, Release 23.0.0.0.0 - Developer-Release
Version 23.2.0.0.0

SQL>

Now, run these commands to create a user called “mark” and give it the necessary privileges:


SQL> create user mark identified by Welcome12345;

User MARK created.

SQL> grant resource, connect, unlimited tablespace to mark;

Grant succeeded.

SQL> grant execute on dbms_aq to mark;

Grant succeeded.

SQL> grant execute on dbms_aqadm to mark;

Grant succeeded.

SQL> grant execute on dbms_aqin to mark;

Grant succeeded.

SQL> grant execute on dbms_aqjms_internal to mark;

Grant succeeded.

SQL> grant execute on dbms_teqk to mark;

Grant succeeded.

SQL> grant execute on DBMS_RESOURCE_MANAGER to mark;

Grant succeeded.

SQL> grant select_catalog_role to mark;

Grant succeeded.

SQL> grant select on sys.aq$_queue_shards to mark;

Grant succeeded.

SQL> grant select on user_queue_partition_assignment_table to mark;

Grant succeeded.

SQL> exec  dbms_teqk.AQ$_GRANT_PRIV_FOR_REPL('MARK');

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL> quit;

Create a Kafka topic and consumer group using these statements. Note that you could also do this from the Java code, or using the Kafka-compatible Transactional Event Queues REST API (which I wrote about in this post):

begin
  -- Creates a topic named TEQ with 5 partitions and 7 days of retention time
  dbms_teqk.aq$_create_kafka_topic('TEQ', 5); 
  -- Creates a Consumer Group CG1 for Topic TEQ
  dbms_aqadm.add_subscriber('TEQ', subscriber => sys.aq$_agent('CG1', null, null));
end;
/

You should note that the dbms_teqk package is likely to be renamed in the GA release of Oracle Database 23c, but for the Oracle Database 23c Free – Developer Release you can use it.

Ok, we are ready to start on our Java code!

Create a Java project

Let’s create a Maven POM file (pom.xml) and add the dependencies we need for this application. I’ve also included some profiles to make it easy to run the two main entry points we will create – the producer, and the consumer. Here’s the content for the pom.xml. Note that I have excluded the osdt_core and osdt_cert transitive dependencies, since we are not using a wallet or SSL in this example, so we do not need those libraries:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>okafka-demo</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>okafka-demo</name>

    <description>OKafka demo</description>

    <properties>
        <java.version>17</java.version>
        <maven.compiler.target>17</maven.compiler.target>
        <maven.compiler.source>17</maven.compiler.source>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.oracle.database.messaging</groupId>
            <artifactId>okafka</artifactId>
            <version>23.2.0.0</version>
            <exclusions>
                <exclusion>
                    <artifactId>osdt_core</artifactId>
                    <groupId>com.oracle.database.security</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>osdt_cert</artifactId>
                    <groupId>com.oracle.database.security</groupId>
                </exclusion>
            </exclusions>
        </dependency>
    </dependencies>

    <profiles>
        <profile>
            <id>consumer</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>exec-maven-plugin</artifactId>
                        <version>3.0.0</version>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>exec</goal>
                                </goals>
                            </execution>
                        </executions>
                        <configuration>
                            <executable>java</executable>
                            <arguments>
                                <argument>-Doracle.jdbc.fanEnabled=false</argument>
                                <argument>-classpath</argument>
                                <classpath/>
                                <argument>com.example.SimpleConsumerOKafka</argument>
                            </arguments>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </profile>
        <profile>
            <id>producer</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>exec-maven-plugin</artifactId>
                        <version>3.0.0</version>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>exec</goal>
                                </goals>
                            </execution>
                        </executions>
                        <configuration>
                            <executable>java</executable>
                            <arguments>
                                <argument>-Doracle.jdbc.fanEnabled=false</argument>
                                <argument>-classpath</argument>
                                <classpath/>
                                <argument>com.example.SimpleProducerOKafka</argument>
                            </arguments>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </profile>
    </profiles>

</project>

This is a pretty straightforward POM. I just set the project’s coordinates, declared my one dependency, and then created the two profiles so I can run the code easily.

Next, we are going to need a file called ojdbc.properties in the same directory as the POM with this content:

user=mark
password=Welcome12345

The KafkaProducer and KafkaConsumer will use this to connect to the database.

Create the consumer

Ok, now let’s create our consumer. In a directory called src/main/java/com/example, create a new Java file called SimpleConsumerOKafka.java with the following content:

package com.example;

import java.util.Properties;
import java.time.Duration;
import java.util.Arrays;

import org.oracle.okafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;

public class SimpleConsumerOKafka {
  public static void main(String[] args) {
    // set the required properties
    Properties props = new Properties();
    props.put("bootstrap.servers", "172.17.0.2:1521");
    props.put("group.id" , "CG1");
    props.put("enable.auto.commit","false");
    props.put("max.poll.records", 100);

    props.put("key.deserializer", 
      "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", 
      "org.apache.kafka.common.serialization.StringDeserializer");

    props.put("oracle.service.name", "freepdb1");
    props.put("oracle.net.tns_admin", "."); 
    props.put("security.protocol","PLAINTEXT");

    // create the consumer
    Consumer<String , String> consumer = new KafkaConsumer<String, String>(props);		
    consumer.subscribe(Arrays.asList("TEQ"));
 
    int expectedMsgCnt = 4000;
    int msgCnt = 0;
    long startTime = 0;

    // consume messages
    try {
      startTime = System.currentTimeMillis();
      while(true) {
        try {
          ConsumerRecords <String, String> records = 
            consumer.poll(Duration.ofMillis(10_000));

          for (ConsumerRecord<String, String> record : records) {
            System.out.printf("partition = %d, offset = %d, key = %s, value = %s\n ", 
              record.partition(), record.offset(), record.key(), record.value());
            for(Header h: record.headers()) {
              System.out.println("Header: " + h.toString());
            }
          }

          // commit the records we received
          if (records != null && records.count() > 0) {
            msgCnt += records.count();
            System.out.println("Committing records " + records.count());
            try {
              consumer.commitSync();
            } catch(Exception e) {
              System.out.println("Exception in commit " + e.getMessage());
              continue;
            }

            // if we got all the messages we expected, then exit
            if (msgCnt >= expectedMsgCnt ) {
              System.out.println("Received " + msgCnt + ". Expected " +
               expectedMsgCnt +". Exiting Now.");
              break;
            }
          } else {
            System.out.println("No records fetched. Retrying...");
            Thread.sleep(1000);
          }
        } catch(Exception e) {
          System.out.println("Inner Exception " + e.getMessage());
          throw e;
        }			
      }
    } catch(Exception e) {
      System.out.println("Exception from consumer " + e);
      e.printStackTrace();
    } finally {
      long runDuration = System.currentTimeMillis() - startTime;
      System.out.println("Application closing Consumer. Run duration " + 
        runDuration + " ms");
      consumer.close();
    }
  }
}

Let’s walk through this code together.

The first thing we do is prepare the properties for the KafkaConsumer. This is fairly standard, though notice that the bootstrap.servers property contains the address of your database listener:

    Properties props = new Properties();
    props.put("bootstrap.servers", "172.17.0.2:1521");
    props.put("group.id" , "CG1");
    props.put("enable.auto.commit","false");
    props.put("max.poll.records", 100);

    props.put("key.deserializer", 
      "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", 
      "org.apache.kafka.common.serialization.StringDeserializer");

Then, we add some Oracle-specific properties – oracle.service.name is the name of the service we are connecting to, in our case this is freepdb1; oracle.net.tns_admin needs to point to the directory where we put our ojdbc.properties file; and security.protocol controls whether we are using SSL, or not, as in this case:

    props.put("oracle.service.name", "freepdb1");
    props.put("oracle.net.tns_admin", "."); 
    props.put("security.protocol","PLAINTEXT");

With that done, we can create the KafkaConsumer and subscribe to a topic. Note that we use the Oracle version of KafkaConsumer, which is basically just a wrapper that understands those extra Oracle-specific properties:

import org.oracle.okafka.clients.consumer.KafkaConsumer;

// ...

    Consumer<String , String> consumer = new KafkaConsumer<String, String>(props);		
    consumer.subscribe(Arrays.asList("TEQ"));

The rest of the code is standard Kafka code that polls for records, prints out any it finds, commits them, and then loops until it has received the number of records it expected and then exits.

Run the consumer

We can build and run the consumer with this command:

mvn exec:exec -P consumer

It will connect to the database and start polling for records; of course there won’t be any yet, because we have not created the producer. It should output a message like this about every ten seconds:

No records fetched. Retrying...

Let’s write that producer!

Create the producer

In a directory called src/main/java/com/example, create a new Java file called SimpleProducerOKafka.java with the following content:

package com.example;

import org.oracle.okafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.header.internals.RecordHeader;

import java.util.Properties;
import java.util.concurrent.Future;

public class SimpleProducerOKafka {
  public static void main(String[] args) {
    long startTime = 0;
    try {
      // set the required properties
      Properties props = new Properties();
      props.put("bootstrap.servers", "172.17.0.2:1521");
      props.put("key.serializer", 
        "org.apache.kafka.common.serialization.StringSerializer");
      props.put("value.serializer", 
        "org.apache.kafka.common.serialization.StringSerializer");
      props.put("batch.size", "5000");
      props.put("linger.ms","500");

      props.put("oracle.service.name", "freepdb1");
      props.put("oracle.net.tns_admin", ".");
      props.put("security.protocol","PLAINTEXT");

      // create the producer
      Producer<String, String> producer = new KafkaProducer<String, String>(props);

      Future<RecordMetadata> lastFuture = null;
      int msgCnt = 4000;
      startTime = System.currentTimeMillis();

      // send the messages
      for (int i = 0; i < msgCnt; i++) {
        RecordHeader rH1 = new RecordHeader("CLIENT_ID", "FIRST_CLIENT".getBytes());
        RecordHeader rH2 = new RecordHeader("REPLY_TO", "TOPIC_M5".getBytes());
				
        ProducerRecord<String, String> producerRecord = 
          new ProducerRecord<String, String>(
            "TEQ", String.valueOf(i), "Test message "+ i
          );
        producerRecord.headers().add(rH1).add(rH2);
				
        lastFuture = producer.send(producerRecord);
      }
			
      // wait for the last one to finish
      lastFuture.get();

      // print summary
      long runTime = System.currentTimeMillis() - startTime;
      System.out.println("Produced "+ msgCnt +" messages in " + runTime + "ms.");
      producer.close();
    }		
    catch(Exception e) {
      System.out.println("Caught exception: " + e );
      e.printStackTrace();
    }
  }
}

This code is quite similar to the consumer. We first set up the Kafka properties, including the Oracle-specific ones. Then we create a KafkaProducer, again using the Oracle version which understands those extra properties. After that we just loop and produce the desired number of records.

Make sure your consumer is still running (or restart it) and then build and run the producer with this command:

mvn exec:exec -P producer

When you do this, it will run for a short time and then print a message like this to let you know it is done:

Produced 4000 messages in 1955ms.

Now take a look at the output in the consumer window. You should see quite a lot of output there. Here’s a short snippet from the end:

partition = 0, offset = 23047, key = 3998, value = Test message 3998
 Header: RecordHeader(key = CLIENT_ID, value = [70, 73, 82, 83, 84, 95, 67, 76, 73, 69, 78, 84])
 Header: RecordHeader(key = REPLY_TO, value = [84, 79, 80, 73, 67, 95, 77, 53])
Committing records 27
Received 4000. Expected 4000. Exiting Now.
Application closing Consumer. Run duration 510201 ms

It prints out a message for each record it finds, including the partition ID, the offset, and the key and value. It then prints out the headers. You will also see commit messages, and at the end it prints out how many records it found and how long it was running for. I left mine running while I got the producer ready to go, so it shows a fairly long duration 🙂 But you can run it again and start the producer immediately after it and you will see a much shorter run duration.

Well, there you go! That’s a Kafka producer and consumer using the new updated 23c version of the Kafka-compatible Java API for Transactional Event Queues. Stay tuned for more!

]]>
https://redstack.dev/2023/05/01/new-23c-version-of-kafka-compatible-java-apis-for-transactional-event-queues-published/feed/ 0 3931
Spring Boot Starters for Oracle updated https://redstack.dev/2023/04/24/spring-boot-starters-for-oracle-updated/ https://redstack.dev/2023/04/24/spring-boot-starters-for-oracle-updated/#respond <![CDATA[Mark Nelson]]> Mon, 24 Apr 2023 19:11:00 +0000 <![CDATA[Uncategorized]]> <![CDATA[oracle]]> <![CDATA[Spring]]> <![CDATA[springboot]]> <![CDATA[starter]]> <![CDATA[UCP]]> https://redstack.dev/?p=3923 <![CDATA[Hi everyone. We have just published some updates to the Spring Boot Starters for Oracle Database – we added a starter for UCP (Universal Connection Pool) for Spring 3.0.2. This makes it easy to access the Oracle Database from a … Continue reading ]]> <![CDATA[

Hi everyone. We have just published some updates to the Spring Boot Starters for Oracle Database – we added a starter for UCP (Universal Connection Pool) for Spring 3.0.2. This makes it easy to access the Oracle Database from a Spring Boot application – just two steps!

Add a dependency to your Maven POM file (or equivalent)

Here’s the dependency to add:

<dependency>
   <groupId>com.oracle.database.spring</groupId>
   <artifactId>oracle-spring-boot-starter-ucp</artifactId>
   <version>3.0.2</version>   <!-- or 2.7.7 for Spring Boot 2.x --> 
   <type>pom</type>
</dependency>

Add the datasource properties to your Spring Boot application.yaml

Here’s an example, assuming you are also using Spring Data JPA:

spring:
  application:
    name: aqjms
  jpa:
    hibernate:
      ddl-auto: update
    properties:
      hibernate:
        dialect: org.hibernate.dialect.Oracle12cDialect
        format_sql: true
      show-sql: true
  datasource:
    url: jdbc:oracle:thin:@//1.2.3.4:1521/pdb1
    username: someuser
    password: somepassword
    driver-class-name: oracle.jdbc.OracleDriver
    type: oracle.ucp.jdbc.PoolDataSource
    oracleucp:
      connection-factory-class-name: oracle.jdbc.pool.OracleDataSource
      connection-pool-name: AccountConnectionPool
      initial-pool-size: 15
      min-pool-size: 10
      max-pool-size: 30

That’s super easy, right?
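With the starter on the classpath and those properties in place, Spring Boot wires up a UCP-backed DataSource that you can inject like any other bean. Here’s a minimal sketch (the class name and query are just for illustration):

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;

@Service
public class AccountService {

    // backed by the UCP pool configured in application.yaml
    private final JdbcTemplate jdbcTemplate;

    public AccountService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public String dbTime() {
        return jdbcTemplate.queryForObject("select to_char(sysdate) from dual", String.class);
    }
}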

We are working to add more Spring Boot Starters for Oracle Database to make it even easier to use, and to make sure we cover all the versions you need! Stay tuned for more updates!

p.s. If you use Spring Boot and Oracle Database, be sure to check out Oracle Backend for Spring Boot!

]]>
https://redstack.dev/2023/04/24/spring-boot-starters-for-oracle-updated/feed/ 0 3923
Oracle Backend for Spring Boot (and Parse Platform) introductory video plublished! https://redstack.dev/2023/04/20/oracle-backend-for-spring-boot-and-parse-platform-introductory-video-plublished/ https://redstack.dev/2023/04/20/oracle-backend-for-spring-boot-and-parse-platform-introductory-video-plublished/#respond <![CDATA[Mark Nelson]]> Thu, 20 Apr 2023 13:03:20 +0000 <![CDATA[Uncategorized]]> <![CDATA[backend]]> <![CDATA[microservices]]> <![CDATA[mobile]]> <![CDATA[oracle]]> <![CDATA[parse]]> <![CDATA[springboot]]> https://redstack.dev/?p=3920 <![CDATA[We just published a short YouTube video that introduces the Oracle Backend for Spring Boot (and Parse Platform) which makes it super easy to develop, run and manage Spring Boot microservices and mobile applications leveraging all the power of Oracle’s … Continue reading ]]> <![CDATA[

We just published a short YouTube video that introduces the Oracle Backend for Spring Boot (and Parse Platform) which makes it super easy to develop, run and manage Spring Boot microservices and mobile applications leveraging all the power of Oracle’s converged database.

I hope you can check it out!

]]>
https://redstack.dev/2023/04/20/oracle-backend-for-spring-boot-and-parse-platform-introductory-video-plublished/feed/ 0 3920
Implementing the Transactional Outbox pattern using Transactional Event Queues and JMS https://redstack.dev/2023/04/11/implementing-the-transactional-outbox-pattern-using-transactional-event-queues-and-jms/ https://redstack.dev/2023/04/11/implementing-the-transactional-outbox-pattern-using-transactional-event-queues-and-jms/#respond <![CDATA[Mark Nelson]]> Tue, 11 Apr 2023 20:15:45 +0000 <![CDATA[Uncategorized]]> <![CDATA[23c]]> <![CDATA[23c free]]> <![CDATA[JMS]]> <![CDATA[TEQ]]> <![CDATA[transactional outbox]]> <![CDATA[txeventq]]> https://redstack.dev/?p=3873 <![CDATA[Hi, in this post I want to provide an example of how to implement the Transactional Outbox pattern using Transactional Event Queues and JMS with the new Oracle Database 23c Free – Developer Release I mentioned in my last post. … Continue reading ]]> <![CDATA[

Hi, in this post I want to provide an example of how to implement the Transactional Outbox pattern using Transactional Event Queues and JMS with the new Oracle Database 23c Free – Developer Release I mentioned in my last post.

In the Transactional Outbox pattern, we have a microservice that needs to perform a database operation (like an insert) and send a message, and either both or neither of these need to happen.

Unlike other messaging providers, Transactional Event Queues is built-in to the Oracle Database and has the unique advantage of being able to expose the underlying database transaction to your application. This allows us to perform database and messaging operations in the same transaction – which is exactly what we need to implement this pattern.
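To make that concrete before we build it out, here is a conceptual sketch (not the full implementation we will develop below) using the Oracle AQ JMS API, where the JMS session exposes its underlying JDBC connection. The table and queue names are hypothetical, and this fragment would live inside a method declared to throw Exception:

import javax.jms.*;
import oracle.jms.AQjmsFactory;
import oracle.jms.AQjmsSession;

// assuming "ds" is a javax.sql.DataSource pointing at the database
QueueConnectionFactory qcf = AQjmsFactory.getQueueConnectionFactory(ds);
QueueConnection conn = qcf.createQueueConnection();
QueueSession session = conn.createQueueSession(true, Session.SESSION_TRANSACTED);

// the JMS session and this JDBC connection share the same database transaction
java.sql.Connection jdbc = ((AQjmsSession) session).getDBConnection();
try (java.sql.PreparedStatement ps =
        jdbc.prepareStatement("insert into deposits (account_id, amount) values (?, ?)")) {
    ps.setInt(1, 2);
    ps.setInt(2, 200);
    ps.executeUpdate();
}

// send the message as part of the same transaction
Queue queue = ((AQjmsSession) session).getQueue("MARK", "DEPOSITS_QT");
session.createSender(queue).send(session.createTextMessage("{\"accountId\":2,\"amount\":200}"));

// both the insert and the send commit (or roll back) together
session.commit();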

Prepare the database

The first thing we want to do is start up the Oracle 23c Free Database. This is very easy to do in a container using a command like this:

docker run --name free23c -d -p 1521:1521 -e ORACLE_PWD=Welcome12345 container-registry.oracle.com/database/free:latest

This will pull the image and start up the database with a listener on port 1521. It will also create a pluggable database called “FREEPDB1” and will set the admin passwords to the password you specified in this command.

You can tail the logs to see when the database is ready to use:

docker logs -f free23c

(look for this message...)
#########################
DATABASE IS READY TO USE!
#########################

Also, grab the IP address of the container; we’ll need that to connect to the database:

docker inspect free23c | grep IPA
            "SecondaryIPAddresses": null,
            "IPAddress": "172.17.0.2",
                    "IPAMConfig": null,
                    "IPAddress": "172.17.0.2",

To set up the necessary permissions, you’ll need to connect to the database with a client. If you don’t have one already, I’d recommend trying the new SQLcl CLI which you can download here. Start it up and connect to the database like this (note that your IP address and password may be different):

sql sys/Welcome12345@//172.17.0.2:1521/freepdb1 as sysdba


SQLcl: Release 22.2 Production on Tue Apr 11 12:36:24 2023

Copyright (c) 1982, 2023, Oracle.  All rights reserved.

Connected to:
Oracle Database 23c Free, Release 23.0.0.0.0 - Developer-Release
Version 23.2.0.0.0

SQL>

Now, run these commands to create a user called “mark” and give it the necessary privileges:


SQL> create user mark identified by Welcome12345;

User MARK created.

SQL> grant resource , connect, unlimited tablespace to mark;

Grant succeeded.

SQL> grant execute on dbms_aq to mark;

Grant succeeded.

SQL> grant execute on dbms_aqadm to mark;

Grant succeeded.

SQL> grant execute on dbms_aqin to mark;

Grant succeeded.

SQL> grant execute on dbms_aqjms_internal to mark;

Grant succeeded.

SQL> grant execute on dbms_teqk to mark;

Grant succeeded.

SQL> grant execute on DBMS_RESOURCE_MANAGER to mark;

Grant succeeded.

SQL> grant select_catalog_role to mark;

Grant succeeded.

SQL> grant select on sys.aq$_queue_shards to mark;

Grant succeeded.

SQL> grant select on user_queue_partition_assignment_table to mark;

Grant succeeded.

SQL> exec  dbms_teqk.AQ$_GRANT_PRIV_FOR_REPL('MARK');

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL> quit;

Ok, we are ready to start on our Java code!

Create the Java project

If you have read my posts before, you’ll know I like to use Maven for my Java projects. Let’s create a Maven POM file (pom.xml) and add the dependencies we need for this application. I’ve also included some profiles to make it easy to run the three main entry points we will create – one to create a queue, one to consume messages, and finally the transactional outbox implementation. Here’s the content for the pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" 
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
         https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<groupId>com.example</groupId>
	<artifactId>txoutbox</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<name>txoutbox</name>

	<properties>
		<java.version>17</java.version>
		<maven.compiler.source>17</maven.compiler.source>
		<maven.compiler.target>17</maven.compiler.target>
        </properties>

    <dependencies>
        <dependency>
            <groupId>com.oracle.database.messaging</groupId>
            <artifactId>aqapi</artifactId>
            <version>21.3.0.0</version>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
            <scope>compile</scope>
            <version>2.14.2</version>
        </dependency>
        <dependency>
            <groupId>javax.transaction</groupId>
            <artifactId>jta</artifactId>
            <version>1.1</version>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>jakarta.jms</groupId>
            <artifactId>jakarta.jms-api</artifactId>
            <scope>compile</scope>
            <version>3.1.0</version>
        </dependency>
        <dependency>
            <groupId>jakarta.management.j2ee</groupId>
            <artifactId>jakarta.management.j2ee-api</artifactId>
            <scope>compile</scope>
            <version>1.1.4</version>
        </dependency>
        <dependency>
            <groupId>com.oracle.database.jdbc</groupId>
            <artifactId>ojdbc11</artifactId>
            <scope>compile</scope>
            <version>21.3.0.0</version>
        </dependency>
        <dependency>
            <groupId>com.oracle.database.jdbc</groupId>
            <artifactId>ucp</artifactId>
            <scope>compile</scope>
            <version>21.3.0.0</version>
        </dependency>
    </dependencies>

    <profiles>
        <profile>
            <id>publish</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>exec-maven-plugin</artifactId>
                        <version>3.0.0</version>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>exec</goal>
                                </goals>
                            </execution>
                        </executions>
                        <configuration>
                            <executable>java</executable>
                            <arguments>
                                <argument>-Doracle.jdbc.fanEnabled=false</argument>
                                <argument>-classpath</argument>
                                <classpath/>
                                <argument>com.example.Publish</argument>
                                <argument>jack</argument>
                                <argument>[email protected]</argument>
                                <argument>0</argument>
                            </arguments>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </profile>

        <profile>
            <id>consume</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>exec-maven-plugin</artifactId>
                        <version>3.0.0</version>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>exec</goal>
                                </goals>
                            </execution>
                        </executions>
                        <configuration>
                            <executable>java</executable>
                            <arguments>
                                <argument>-Doracle.jdbc.fanEnabled=false</argument>
                                <argument>-classpath</argument>
                                <classpath/>
                                <argument>com.example.Consume</argument>
                            </arguments>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </profile>

        <profile>
            <id>createq</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>exec-maven-plugin</artifactId>
                        <version>3.0.0</version>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>exec</goal>
                                </goals>
                            </execution>
                        </executions>
                        <configuration>
                            <executable>java</executable>
                            <arguments>
                                <argument>-Doracle.jdbc.fanEnabled=false</argument>
                                <argument>-classpath</argument>
                                <classpath/>
                                <argument>com.example.CreateTxEventQ</argument>
                            </arguments>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </profile>
    </profiles>

</project>

I won’t go into a heap of detail on this or the first two Java classes, since they are fairly standard and I have talked about very similar things before in older posts, including this one for example. I will go into detail on the transactional outbox implementation though, don’t worry!

Create a Java class to create the queue

We are going to need a queue to put messages on, so let me show you how to do that in Java. Transactional Event Queues support various types of queues and payloads. This example shows how to create a queue that uses the JMS format. Create a file called src/main/java/com/example/CreateTxEventQ.java with this content:

package com.example;

import java.sql.SQLException;

import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicSession;

import oracle.AQ.AQException;
import oracle.AQ.AQQueueTableProperty;
import oracle.jms.AQjmsDestination;
import oracle.jms.AQjmsFactory;
import oracle.jms.AQjmsSession;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class CreateTxEventQ {

    private static String username = "mark";
    private static String password = "Welcome12345";
    private static String url = "jdbc:oracle:thin:@//172.17.0.2:1521/freepdb1";

    public static void main(String[] args) throws AQException, SQLException, JMSException {
        
        // create a topic session
        PoolDataSource ds = PoolDataSourceFactory.getPoolDataSource();
        ds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        ds.setURL(url);
        ds.setUser(username);
        ds.setPassword(password);

        TopicConnectionFactory tcf = AQjmsFactory.getTopicConnectionFactory(ds);
        TopicConnection conn = tcf.createTopicConnection();
        conn.start();
        TopicSession session = (AQjmsSession) conn.createSession(true, Session.AUTO_ACKNOWLEDGE);

        // create properties
        AQQueueTableProperty props = new AQQueueTableProperty("SYS.AQ$_JMS_TEXT_MESSAGE");
        props.setMultiConsumer(true);
        props.setPayloadType("SYS.AQ$_JMS_TEXT_MESSAGE");
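        // note: these classic AQ queue table properties are created here but are not
        // actually passed to the createJMSTransactionalEventQueue call below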

        // create queue table, topic and start it
        Destination myTeq = ((AQjmsSession) session).createJMSTransactionalEventQueue("my_txeventq", true);
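        // start the queue - the two boolean arguments enable enqueue and dequeue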
        ((AQjmsDestination) myTeq).start(session, true, true);

    }

}

As you read through this, you’ll see I’ve just hardcoded the username, password and URL for convenience in this file (and the others in this post) – of course, we’d never do that in real life, would we 🙂 You should also notice that we get a connection, set up the queue table properties – multi-consumer (i.e., pub/sub, so a JMS Topic) and the JMS text message payload format – then create the transactional event queue itself and start it up. Easy, right?

You can run this and create the queue with this command:

mvn exec:exec -Pcreateq

If you want to see the queue in the database, you can log in using that mark user you created and run a query:

$ sql mark/Welcome12345@//172.17.0.2:1521/freepdb1

SQLcl: Release 22.2 Production on Tue Apr 11 15:18:58 2023

Copyright (c) 1982, 2023, Oracle.  All rights reserved.


Last Successful login time: Tue Apr 11 2023 15:18:59 -04:00

Connected to:
Oracle Database 23c Free, Release 23.0.0.0.0 - Developer-Release
Version 23.2.0.0.0

SQL> select * from USER_QUEUES ;

NAME           QUEUE_TABLE         QID QUEUE_TYPE         MAX_RETRIES    RETRY_DELAY ENQUEUE_ENABLED    DEQUEUE_ENABLED    RETENTION    USER_COMMENT    NETWORK_NAME    SHARDED    QUEUE_CATEGORY               RECIPIENTS
______________ ______________ ________ _______________ ______________ ______________ __________________ __________________ ____________ _______________ _______________ __________ ____________________________ _____________
MY_TXEVENTQ    MY_TXEVENTQ       78567 NORMAL_QUEUE                 5              0   YES                YES              0                                            TRUE       Transactional Event Queue    MULTIPLE

While you’re there, let’s also create a table so we have somewhere to perform the database insert operation:

create table customer ( name varchar2(256), email varchar2(256) ); 

Create the consumer

Let’s create the consumer next. This will be a new Java file in the same directory called Consume.java. Here’s the content:

package com.example;

import java.sql.SQLException;

import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicSession;

import oracle.AQ.AQException;
import oracle.jms.AQjmsFactory;
import oracle.jms.AQjmsSession;
import oracle.jms.AQjmsTextMessage;
import oracle.jms.AQjmsTopicSubscriber;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class Consume {

    private static String username = "mark";
    private static String password = "Welcome12345";
    private static String url = "jdbc:oracle:thin:@//172.17.0.2:1521/freepdb1";
    private static String topicName = "my_txeventq";

    public static void main(String[] args) throws AQException, SQLException, JMSException {

        // create a topic session
        PoolDataSource ds = PoolDataSourceFactory.getPoolDataSource();
        ds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        ds.setURL(url);
        ds.setUser(username);
        ds.setPassword(password);

        // create a JMS topic connection and session
        TopicConnectionFactory tcf = AQjmsFactory.getTopicConnectionFactory(ds);
        TopicConnection conn = tcf.createTopicConnection();
        conn.start();
        TopicSession session = 
           (AQjmsSession) conn.createSession(true, Session.AUTO_ACKNOWLEDGE);

        // create a subscriber on the topic
        Topic topic = ((AQjmsSession) session).getTopic(username, topicName);
        AQjmsTopicSubscriber subscriber = 
           (AQjmsTopicSubscriber) session.createDurableSubscriber(topic, "my_subscriber");

        System.out.println("Waiting for messages...");

        // wait forever for messages to arrive and print them out
        while (true) {

            // the 1_000 is a one second timeout
            AQjmsTextMessage message = (AQjmsTextMessage) subscriber.receive(1_000); 
            if (message != null) {
                if (message.getText() != null) {
                    System.out.println(message.getText());
                } else {
                    System.out.println();
                }
            }
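            // commit the transacted session so the message consumption is made permanent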
            session.commit();
        }
    }

}

This one is a fairly standard JMS consumer. It creates a durable subscription on the topic we just created, waits for messages to arrive, and prints their content on the screen. Nice and simple. You can run it with this command:

mvn exec:exec -Pconsume

Leave that running so that you see messages as they are produced. Later, when you run the transactional outbox producer, run it in a different window so that you can see what happens in the consumer.

Implement the Transactional Outbox pattern

Yay! The fun part! Here’s the code for this class, which will go into a new Java file in the same directory called Publish.java. I’ll walk through this code step by step.

package com.example;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicSession;

import oracle.AQ.AQException;
import oracle.jms.AQjmsAgent;
import oracle.jms.AQjmsFactory;
import oracle.jms.AQjmsSession;
import oracle.jms.AQjmsTextMessage;
import oracle.jms.AQjmsTopicPublisher;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class Publish {

    private static String username = "mark";
    private static String password = "Welcome12345";
    private static String url = "jdbc:oracle:thin:@//172.17.0.2:1521/freepdb1";
    private static String topicName = "my_txeventq";

    public static void main(String[] args) throws JMSException, SQLException {

        AQjmsTopicPublisher publisher = null;
        TopicSession session = null;
        TopicConnection tconn = null;
        Connection conn = null;

        if (args.length != 3) {
            System.err.println("""
                You must provide 3 arguments - name, email and failure mode
                failure mode:
                  0    do not fail
                  1    fail before insert and publish
                  2    fail after insert, before publish
                  3    fail after insert and publish
            """);
            System.exit(1);
        }
        String name = args[0];
        String email = args[1];
        int failMode = Integer.parseInt(args[2]);

        try {
            // create a topic session
            PoolDataSource ds = PoolDataSourceFactory.getPoolDataSource();
            ds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
            ds.setURL(url);
            ds.setUser(username);
            ds.setPassword(password);

            // create a JMS topic connection and session
            TopicConnectionFactory tcf = AQjmsFactory.getTopicConnectionFactory(ds);
            tconn = tcf.createTopicConnection();
            tconn.start();

            // open a Transactional session
            session = (AQjmsSession) tconn.createSession(true, Session.AUTO_ACKNOWLEDGE);

            // also get the JDBC connection
            conn = ((AQjmsSession) session).getDBConnection();
            conn.setAutoCommit(false);

            // if failMode = 1, fail here
            if (failMode == 1) throw new Exception();

            // first, perform the database operation
            PreparedStatement stmt = conn.prepareStatement("insert into customer (name, email) values (?, ?)");
            stmt.setString(1,  name);
            stmt.setString(2, email);
            stmt.executeUpdate();
            System.out.println("row inserted");

            // if failMode = 2, fail here
            if (failMode == 2) throw new Exception();

            // second, publish the message
            Topic topic = ((AQjmsSession) session).getTopic(username, topicName);
            publisher = (AQjmsTopicPublisher) session.createPublisher(topic);

            AQjmsTextMessage message = (AQjmsTextMessage) session.createTextMessage("new customer with name=" + name + " and email=" + email);
            publisher.publish(message, new AQjmsAgent[] { new AQjmsAgent("my_subscriber", null) });
            System.out.println("message sent");

            // if failMode = 3, fail here
            if (failMode == 3) throw new Exception();        

            // we didn't fail - so commit the transaction
            if (failMode == 0) session.commit();

        } catch (Exception e) {
            System.err.println("rolling back");
            if (conn != null) conn.rollback();
        } finally {
            // clean up
            if (publisher != null) publisher.close();
            if (session != null) session.close();
            if (tconn != null) tconn.close();
        }
    }

}

Ok, so the overall structure of the code is as follows:

First, we are going to start a transaction. Then we will perform two operations – insert a record into the customer table, and send a message on a topic. If everything works as expected, we will commit the transaction. Of course, if there is a failure at any point, we will roll back instead. In the code I have included numbered failure points, one at each step of this flow, so we can simulate a failure at any of them.

At the start of the main method, we are going to check we have the expected arguments — the name and email, and the point at which to fail, i.e., which of those failure points to simulate. A “0” indicates that no failure should be simulated. So if we run the code with “mark [email protected] 2” as the input, we expect it to fail at point “2” – after it has inserted the row in the table and before it sends the message on the topic.

Next we get both a JMS Connection and a JDBC Connection. This is important because it allows us to have a single transaction. Note the following lines:

// open a Transactional session
session = (AQjmsSession) tconn.createSession(true, Session.AUTO_ACKNOWLEDGE);

// also get the JDBC connection
conn = ((AQjmsSession) session).getDBConnection();
conn.setAutoCommit(false);

We explicitly set “auto commit” to false on the JDBC connection – we want to control exactly if and when work is committed; we do not want any automatic commits to occur. And on the JMS session we set the “transacted” parameter to true – that’s the first parameter in the createSession() call. This tells it to use the same database transaction.

Next, you will notice that we simulate a failure if the failure point was “1”:

if (failMode == 1) throw new Exception();

If an exception is thrown at this point (or any point), we’d expect to see no new rows in the database and no messages received by the consumer. We can check the table with this query:

select * from customer;

You will see output like this in the consumer window every time a message is delivered – so if you do not see this output, no message was sent:

new customer with name=jack and [email protected]

You can also check directly in the database with this query:

select * from my_txeventq;

The next thing you will see is a standard JDBC Prepared Statement to insert a row into the customer table. Notice that I don’t commit yet.

PreparedStatement stmt = conn.prepareStatement("insert into customer (name, email) values (?, ?)");
stmt.setString(1,  name);
stmt.setString(2, email);
stmt.executeUpdate();
System.out.println("row inserted");

Then you will see failure point “2”.

And next, we have the code to publish a message on the topic. Note that the recipient agent name matches the durable subscriber name (“my_subscriber”) that we used in the consumer:

AQjmsTextMessage message = (AQjmsTextMessage) session.createTextMessage("new customer with name=" + name + " and email=" + email);
publisher.publish(message, new AQjmsAgent[] { new AQjmsAgent("my_subscriber", null) });
System.out.println("message sent");

Then you’ll see failure point “3” and then finally the commit!

Next, notice that the catch block contains a rollback on the database connection. You don’t have to roll back the JMS session as well – since they share the same transaction, this one rollback call is enough to roll back all of the operations.
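
For reference, here is that catch block from the listing above:

} catch (Exception e) {
    System.err.println("rolling back");
    // rolling back the JDBC connection also discards the unpublished message
    if (conn != null) conn.rollback();
}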

Run the Transactional Outbox code

Now we’re ready to run the code! First, notice in the POM file we created a profile called “publish” which contains the following configuration:

<configuration>
  <executable>java</executable>
  <arguments>
    <argument>-Doracle.jdbc.fanEnabled=false</argument>
    <argument>-classpath</argument>
    <classpath/>
    <argument>com.example.Publish</argument>
    <argument>jack</argument>
    <argument>[email protected]</argument>
    <argument>0</argument>
  </arguments>
</configuration>

The last three arguments are the name, email and the failure point. If you go ahead and run it as is (with failure point 0, meaning no failure) then it should actually get all the way through to the commit. You should see output in the consumer window to let you know the message was produced, and you can check the table in the database to see the new record in there. Run the code like this:

mvn exec:exec -Ppublish

Of course, you’ll see a record in the table and the message.

If you now edit the POM file and change that last argument from 0 to any of the other options and run it again, you’ll notice that it rolls back and you do not get a new record in the table or a message produced on the topic.

How do I know it really worked?

If you’d like to experiment and convince yourself it really is working, try something like commenting out failure point 2 like this:

// if (failMode == 2) throw new Exception();

When you run the code again, you will now see that there is a row in the database that was not rolled back (because the failure never occurred and the exception was never thrown) but the message was never sent (because the commit was never run due to failMode being 2, not 0).

If you tweak the failure points you can easily convince yourself that it is in fact working just as expected 🙂

So there you go, that’s the Transactional Outbox pattern implemented using Transactional Event Queues with Oracle Database 23c Free – that was pretty easy, right? Hope you enjoyed it, and see you soon!

]]>
https://redstack.dev/2023/04/11/implementing-the-transactional-outbox-pattern-using-transactional-event-queues-and-jms/feed/ 0 3873
Big news today – Oracle Database 23c Free—Developer Release just released! https://redstack.dev/2023/04/03/big-news-today-oracle-database-23c-free-developer-release-just-released/ https://redstack.dev/2023/04/03/big-news-today-oracle-database-23c-free-developer-release-just-released/#comments <![CDATA[Mark Nelson]]> Mon, 03 Apr 2023 23:11:12 +0000 <![CDATA[Uncategorized]]> <![CDATA[23c]]> <![CDATA[database]]> <![CDATA[duality]]> <![CDATA[free]]> <![CDATA[javascript stored procedures]]> <![CDATA[oracle]]> https://redstack.dev/?p=3867 <![CDATA[Hi everyone! Big news today, just announced at Oracle CloudWorld in Singapore! The new Oracle Database 23c Free – Developer Release is now available. Oracle Database 23c Free – Developer Release is the first release of the next-generation Oracle Database, … Continue reading ]]> <![CDATA[

Hi everyone! Big news today, just announced at Oracle CloudWorld in Singapore!

The new Oracle Database 23c Free – Developer Release is now available.

Oracle Database 23c Free – Developer Release is the first release of the next-generation Oracle Database, allowing developers a head-start on building applications with innovative 23c features that simplify development of modern data-driven apps. The entire feature set of Oracle Database 23c is planned to be generally available within the next 12 months.

It has heaps of new developer-focused features and it’s completely free! And easy to download and use!

My two favorite features are:

  • the new JSON Relational Duality Views which allow you to create a JSON document representation from a number of existing tables, and they are read/write! So you can use JSON in your applications and have the underlying data stored in relational tables. Of course you can store it in JSON too if you want to!
  • JavaScript Stored Procedures, or as I like to think of them – in-database microservices which can scale to zero, with fast startup and scaling, and resource management to prevent noisy neighbors!

I look forward to writing posts about those, and some other exciting new features really soon.

You can find it here: https://www.oracle.com/database/free/

]]>
https://redstack.dev/2023/04/03/big-news-today-oracle-database-23c-free-developer-release-just-released/feed/ 1 3867
Session catalog for DevLive Level Up 2023 released! https://redstack.dev/2023/03/01/session-catalog-for-devlive-level-up-2023-released/ https://redstack.dev/2023/03/01/session-catalog-for-devlive-level-up-2023-released/#respond <![CDATA[Mark Nelson]]> Wed, 01 Mar 2023 19:53:48 +0000 <![CDATA[Uncategorized]]> <![CDATA[DevLive]]> <![CDATA[flutter]]> <![CDATA[level up]]> <![CDATA[microservices]]> <![CDATA[parse]]> <![CDATA[Spring]]> <![CDATA[springboot]]> https://redstack.dev/?p=3863 <![CDATA[Hi again! In this earlier post, I mentioned that I am speaking at Level Up 2023. The session catalog has just been released on the event website. You can find my sessions in this stream: Data strategies for developers – … Continue reading ]]> <![CDATA[

Hi again! In this earlier post, I mentioned that I am speaking at Level Up 2023. The session catalog has just been released on the event website. You can find my sessions in this stream:

Data strategies for developers – Sessions at a glance

I hope to see some of you there!

]]>
https://redstack.dev/2023/03/01/session-catalog-for-devlive-level-up-2023-released/feed/ 0 3863
I’m speaking at Level Up 2023 https://redstack.dev/2023/02/17/im-speaking-at-level-up-2023/ https://redstack.dev/2023/02/17/im-speaking-at-level-up-2023/#comments <![CDATA[Mark Nelson]]> Fri, 17 Feb 2023 20:03:00 +0000 <![CDATA[Uncategorized]]> <![CDATA[conductor]]> <![CDATA[eureka]]> <![CDATA[flutter]]> <![CDATA[level up]]> <![CDATA[oracle]]> <![CDATA[parse]]> <![CDATA[redwood shores]]> <![CDATA[Spring]]> <![CDATA[springboot]]> https://redstack.dev/?p=3858 <![CDATA[Hi! I am going to be speaking at the Level Up 2023 event at Oracle Redwood Shores in March. I will talking about our new Developer Previews for both Oracle Backend for Spring Boot and Oracle Backend for Parse Platform, … Continue reading ]]> <![CDATA[

Hi! I am going to be speaking at the Level Up 2023 event at Oracle Redwood Shores in March. I will be talking about our new Developer Previews for both Oracle Backend for Spring Boot and Oracle Backend for Parse Platform, and running a hands-on lab where we will use those to build a “Cloud Banking” application in Spring Boot, complete with a web and mobile front end user interface. In the lab we’ll explore topics like service discovery, external configuration, workflow, API management, fault tolerance and observability.

If you’re in the Bay Area and you’d like to attend in person – or if you’d like to attend from anywhere digitally – you can find more information and register here:

https://developer.oracle.com/community/events/devlive-level-up-march-2023.html

]]>
https://redstack.dev/2023/02/17/im-speaking-at-level-up-2023/feed/ 1 3858
A first Spring Boot microservice with Oracle https://redstack.dev/2023/02/03/a-first-spring-boot-microservice-with-oracle/ https://redstack.dev/2023/02/03/a-first-spring-boot-microservice-with-oracle/#respond <![CDATA[Mark Nelson]]> Fri, 03 Feb 2023 18:40:08 +0000 <![CDATA[Uncategorized]]> <![CDATA[jpa]]> <![CDATA[oracle]]> <![CDATA[Spring]]> <![CDATA[springboot]]> https://redstack.dev/?p=3773 <![CDATA[In this post, I want to walk through creating a first simple Spring Boot microservice using Oracle. If you want to follow along, see this earlier post about setting up a development environment. I want to create a “customer” microservice … Continue reading ]]> <![CDATA[

In this post, I want to walk through creating a first simple Spring Boot microservice using Oracle. If you want to follow along, see this earlier post about setting up a development environment.

I want to create a “customer” microservice that I can use to create/register customers, and to get customer details. I want the customer information to be stored in my Oracle database. I am going to create a dedicated schema for this microservice, where it will keep its data. I could create a separate pluggable database, but that seems a little excessive given the simplicity of this service.

So my “customer” data will have the following attributes:

  • Customer ID
  • First name
  • Surname
  • Email address

My service will have endpoints to:

  • Create a customer
  • List all customers
  • Get a customer by ID

I am going to use Spring Boot 3.0.0 with Java 17 and Maven. Spring Boot 3.0.0 was just released (when I started writing this post) and has support for GraalVM native images and better observability and tracing.

Create the project

Let’s start by creating a project. If you set up your development environment like mine, with Visual Studio Code and the Spring Extension Pack, you can type Ctrl+Shift+P to bring up the actions and type in “Spring Init” to find the “Spring Initializr: Create a Maven project” action, then hit enter.

It will now ask you a series of questions. Here’s how I set up my project:

  • Spring Boot Version = 3.0.0
  • Language = Java
  • Group ID = com.redstack
  • Artifact ID = customer
  • Packaging = JAR
  • Java version = 17
  • Dependencies:
    • Spring Web
    • Spring Data JPA
    • Oracle Driver

After that, it will ask you which directory to create the project in. Once you answer all the questions, it will create the project for you and then give you the option to open it (in a new Visual Studio Code window.)

Note: If you prefer, you can go to the Spring Initializr website and answer the same questions there instead. It will then generate the project and give you a zip file to download. If you choose this option, just unzip the file and open it in Visual Studio Code.

Whichever approach you take, you should end up with the generated project open in Code.

I like to trim out a few things that we don’t really need. I tend to delete the “.mvn” directory, the “mvnw” and “mvnw.cmd” files and the “HELP.md” file. Now is also a great time to create a git repository for this code. I like to add/commit all of these remaining files and keep that as my starting point.

Explore the generated code

Here’s the Maven POM (pom.xml) that was generated:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>3.0.0</version>
		<relativePath/> <!-- lookup parent from repository -->
	</parent>
	<groupId>com.redstack</groupId>
	<artifactId>customer</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<name>customer</name>
	<description>Demo project for Spring Boot</description>
	<properties>
		<java.version>17</java.version>
	</properties>
	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-data-jpa</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-web</artifactId>
		</dependency>

		<dependency>
			<groupId>com.oracle.database.jdbc</groupId>
			<artifactId>ojdbc8</artifactId>
			<scope>runtime</scope>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
		</dependency>
	</dependencies>

	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
			</plugin>
		</plugins>
	</build>

</project>

There are a few things to note here. The parent is the standard spring-boot-starter-parent and this will bring in a bunch of useful defaults for us. The dependencies list contains the items we chose in the Spring Initializr (as expected) and finally, note the build section has the spring-boot-maven-plugin included. This will let us build and run the Spring Boot application easily from Maven (with “mvn spring-boot:run“).

Let’s add one more dependency:

<dependency>
	<groupId>org.projectlombok</groupId>
	<artifactId>lombok</artifactId>
	<version>1.18.26</version>
</dependency>

Lombok offers various annotations aimed at replacing Java code that is well known for being boilerplate, repetitive, or tedious to write. We’ll use it to avoid writing getters, setters, constructors and builders.

And here is the main CustomerApplication Java class file:

package com.redstack.customer;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class CustomerApplication {

	public static void main(String[] args) {
		SpringApplication.run(CustomerApplication.class, args);
	}

}

Nothing much to see here. Notice it has the @SpringBootApplication annotation.

Define the Customer Entity

Let’s go ahead and define our data model now. Since we are using JPA, we define our data model using a POJO. Create a Customer.java file in src/main/java/com/redstack/customer with this content:

package com.redstack.customer;

import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import jakarta.persistence.SequenceGenerator;

@Data
@Builder
@AllArgsConstructor
@NoArgsConstructor
@Entity
public class Customer {

    @Id
    @SequenceGenerator(
            name = "customer_id_sequence",
            sequenceName = "customer_id_sequence"
    )
    @GeneratedValue(
            strategy = GenerationType.SEQUENCE,
            generator = "customer_id_sequence"
    )
    private Integer id;
    private String firstName;
    private String lastName;
    private String email;
}

Starting from the bottom, we see the definition of the four fields that we wanted for our Customer entity – ID, first and last names, and email address.

The id field has some annotations on it. First it has @Id which identifies it as the key. Then we have a @SequenceGenerator annotation, which tells JPA that we want to create a “sequence” in the database and gives it a name. A sequence is a database object from which multiple users may generate unique integers. The last annotation, @GeneratedValue tells JPA that this field should be populated from that sequence.

The class also has some annotations on it. It has the JPA @Entity annotation which tells JPA that this is an entity that we want to store in the database. The other annotations are Lombok annotations to save us writing a bunch of boilerplate code. @Data generates getters for all fields, a useful toString method, and hashCode and equals implementations that check all non-transient fields. It will also generate setters for all non-final fields, as well as a constructor. @Builder generates some nice APIs to create instances of our object – we’ll see how we use it later on. And @AllArgsConstructor and @NoArgsConstructor generate pretty much what their names suggest they do.
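
For example, the generated builder lets us create a Customer like this (a preview of the code we will write in the service shortly – the values are just for illustration):

Customer customer = Customer.builder()
        .firstName("Mark")
        .lastName("Nelson")
        .email("mark@example.com")
        .build();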

Set up the Spring Boot Application Properties

Ok, next let’s set up the JPA configuration in the Spring Boot Application Properties. You will find a file called application.properties in src/main/resources. This file can be in either the “properties” format, or in YAML. I personally prefer to use YAML, so I renamed that file to application.yaml and here is the content:

server:
  port: 8080

spring:
  application:
    name: customer
  datasource:
    username: 'customer'
    url: jdbc:oracle:thin:@//172.17.0.2:1521/pdb1
    password: 'Welcome123'
    driver-class-name: oracle.jdbc.driver.OracleDriver
  jpa:
    properties:
      hibernate:
        dialect: org.hibernate.dialect.OracleDialect
        format-sql: 'true'
    hibernate:
      ddl-auto: update
    show-sql: 'true'

Let’s look at what we have here. First we set the port to 8080, and the application’s name to “customer”. If you prefer to use the properties format, these first two settings would look like this:

server.port=8080
spring.application.name=customer

After that we set up the data source. You provide the JDBC URL for your Oracle Database, the username and password, and the JDBC driver class, as shown. Note that the user will need to actually exist. You can create the user in the database by running these statements as an admin user:

create user customer identified by Welcome123;
grant connect, resource to customer;
alter user customer quota unlimited on users;
commit;

The final section of config we see here is the JPA configuration, where we need to declare which “dialect” we are using – this identifies what kind of SQL should be generated, in our case Oracle. The format-sql and show-sql settings are just there to make the SQL statements we see in the logs easier for us to read.

The ddl-auto setting is interesting. Here’s a good article that explains the possible values and what they do. We’ve used update in this example, which “instructs Hibernate to update the database schema by comparing the existing schema with the entity mappings and generate the appropriate schema migration scripts.” That’s a reasonable choice for this scenario, but you should be aware that there are probably better choices in some cases. For example, if you are actively developing the entity and making changes to it, create-drop might be better for you. And if the database objects already exist and you just want to use them, then none might be the best choice – we’ll talk more about this in a future post!

Create the JPA Repository Class

Next, let’s create the JPA Repository class which we can use to save, retrieve and delete entities in/from the database. Create a file called CustomerRepository.java in src/main/java/com/redstack/customer with this content:

package com.redstack.customer;

import org.springframework.data.jpa.repository.JpaRepository;

public interface CustomerRepository extends JpaRepository<Customer, Integer> {
}
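
By the way, Spring Data JPA can also derive queries from method names. We don’t need it for this service, but if we later wanted to look customers up by email, we could add a (hypothetical) finder like this and Spring Data would generate the query for us:

package com.redstack.customer;

import java.util.Optional;

import org.springframework.data.jpa.repository.JpaRepository;

public interface CustomerRepository extends JpaRepository<Customer, Integer> {
    // hypothetical example - Spring Data derives the query from the method name
    Optional<Customer> findByEmail(String email);
}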

Ok, that takes care of our JPA work. Now, let’s get started on our services.

Create the Customer Service

Let’s start with a service to register (create) a new customer. We can start by defining the input data that we expect. Let’s create a CustomerRegistrationRequest.java in the same directory with this content:

package com.redstack.customer;

public record CustomerRegistrationRequest(
    String firstName,
    String lastName,
    String email) {
}

Notice that we did not include the ID, because we are going to get that from the database sequence. So we just need the client/caller to give us the remaining three fields.

Next, we can create our controller. Create a new file called CustomerController.java in the same directory with this content:

package com.redstack.customer;

import org.springframework.http.ResponseEntity;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("api/v1/customers")
public record CustomerController(CustomerService service) {

    @PostMapping
    @ResponseBody
    public ResponseEntity<String> registerCustomer(@RequestBody CustomerRegistrationRequest req) {
        service.registerCustomer(req);
        return ResponseEntity.status(HttpStatus.CREATED).body("Customer registered successfully.\n");
    }
}

So here we used a Java record to define the controller, and we ask Spring to inject the CustomerService for us. Obviously, we have not created that yet – we’ll get to that in a minute! The record has two annotations – @RestController tells Spring to expose a REST API for this record, and @RequestMapping lets us set up the URL path for this controller. Since we set the port to 8080 earlier, and assuming we just run this on our development machine for now, this REST API will have a URL of http://localhost:8080/api/v1/customers.

Next we can define the handlers. Here we have just the first one, to handle HTTP POST requests. We will add others later. Our registerCustomer method will be exposed as the handler for POST requests, because we gave it the @PostMapping annotation, and it will be able to return an HTTP response with a status code and body because we gave it the @ResponseBody annotation. This method accepts the CustomerRegistrationRequest that we defined earlier. Notice that we add the @RequestBody annotation to that method argument. This tells Spring that the data will be provided by the caller as JSON in the HTTP request body (as opposed to being in a query parameter or header, etc.) And this handler simply calls the registerCustomer method in the service and passes through the data.

So, it’s time to write the service! Create a new file called CustomerService.java in the same directory with this content:

package com.redstack.customer;

import org.springframework.stereotype.Service;

@Service
public record CustomerService(CustomerRepository repository) {

    public void registerCustomer(CustomerRegistrationRequest req) {
        Customer customer = Customer.builder()
                .firstName(req.firstName())
                .lastName(req.lastName())
                .email(req.email())
                .build();
        repository.saveAndFlush(customer);
    }
}

Again, we are using a Java record for the service. Records are immutable data classes that require only the type and name of fields. The equals, hashCode, and toString methods, as well as the private, final fields and public constructor, are generated by the Java compiler. You can also include static variables and methods in records. I’m using them here to save a bunch of boilerplate code that I do not want to write.

We put the @Service annotation on the record to tell Spring that this is a service. In the record arguments, we have Spring inject an instance of our CustomerRepository which we will need to talk to the database.

For now, we just need one method in our service, registerCustomer(). We’ll add more later. This method also accepts the CustomerRegistrationRequest, and the first thing we do with it is create a new Customer entity object. Notice that we are using the builder that we auto-generated with Lombok – we never wrote any code to create this builder! Yay! Then, all we need to do is use our JPA repository’s saveAndFlush() method to save that customer in the database. saveAndFlush will perform the INSERT immediately, and the transaction that Spring Data wraps around the repository call is committed when it returns.

Time to test the application!

Let’s start up our service and test it! Before we start, you might want to connect to your database and satisfy yourself that there is no CUSTOMER table there:

sql customer/Welcome123@//172.17.0.2:1521/pdb1
SQL> select table_name from user_tables;

no rows selected

To run the service, run this Maven command:

mvn spring-boot:run

This will compile the code and then run the service. You will see a bunch of log messages appear. In around the middle you should see something like this:


2023-02-03T11:15:37.827-05:00  INFO 8488 --- [           main] SQL dialect                              : HHH000400: Using dialect: org.hibernate.dialect.OracleDialect
Hibernate: create global temporary table HTE_customer(id number(10,0), email varchar2(255 char), first_name varchar2(255 char), last_name varchar2(255 char), rn_ number(10,0) not null, primary key (rn_)) on commit delete rows
Hibernate: create table customer (id number(10,0) not null, email varchar2(255 char), first_name varchar2(255 char), last_name varchar2(255 char), primary key (id))

There’s the SQL that it ran to create the CUSTOMER table for us! If you’d like to, you can check in the database with this statement:

SQL> describe customer;

Name          Null?       Type
_____________ ___________ _____________________
ID            NOT NULL    NUMBER(10)
EMAIL                     VARCHAR2(255 CHAR)
FIRST_NAME                VARCHAR2(255 CHAR)
LAST_NAME                 VARCHAR2(255 CHAR)

You can also take a look at the sequence if you would like to:

SQL> select sequence_name, min_value, increment_by, last_number from user_sequences;

SEQUENCE_NAME              MIN_VALUE   INCREMENT_BY    LAST_NUMBER
_______________________ ____________ _______________ ______________
CUSTOMER_ID_SEQUENCE               1              50           1001

Now, let’s invoke the service to test it! We can invoke the service using cURL; we need to do a POST, set the Content-Type header and provide the data in JSON format:

$ curl -i \
   -X POST \
   -H 'Content-Type: application/json' \
   -d '{"firstName": "Mark", "lastName": "Nelson", "email": "[email protected]"}' \
    http://localhost:8080/api/v1/customers
HTTP/1.1 201
Content-Type: text/plain;charset=UTF-8
Content-Length: 34
Date: Fri, 03 Feb 2023 17:41:39 GMT

Customer registered successfully.

The “-i” tells cURL to print out the response headers as well. You can see that we got an HTTP 201 (created), i.e., success!

Now we see the new record in the database, as expected:

SQL> select * from customer ;

   ID EMAIL            FIRST_NAME    LAST_NAME
_____ ________________ _____________ ____________
    1 [email protected]    Mark          Nelson

Great, that is working the way we wanted, so we can create customers and have them stored in the database. Now let’s add some endpoints to query customers from the database.

Add a “get all customers” endpoint

The first endpoint we want to add will allow us to get a list of all customers. To do this, let’s add this new method to our controller:

// add these imports
import java.util.List;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.ResponseStatus;

// ...

    @GetMapping(produces = {MediaType.APPLICATION_JSON_VALUE})
    @ResponseBody
    @ResponseStatus(HttpStatus.OK)
    public List<Customer> getAllCustomers() {
        return service.getAllCustomers();
    }

Here we have a getAllCustomers() method that simply calls the corresponding method in the service (we’ll write that in a moment) and returns the results. Of course, we have some annotations too. The @GetMapping tells Spring Boot that this method will be exposed as an HTTP GET method handler. The produces defines the output body’s Content-Type, in this case it will be “application/json“. The @ResponseStatus sets the HTTP status code.

Here’s the method we need to add to our CustomerService. Notice it just uses a built-in method on the repository to get the data – it’s very simple:

// add this import
import java.util.List;

// ...

    public List<Customer> getAllCustomers() {
        return repository.findAll();
    }

With those changes in place, we can restart the service and call this new GET endpoint like this:

$ curl -i http://localhost:8080/api/v1/customers
HTTP/1.1 200
Content-Type: application/json
Transfer-Encoding: chunked
Date: Fri, 03 Feb 2023 17:55:17 GMT

[{"id":1,"firstName":"Mark","lastName":"Nelson","email":"[email protected]"}]

You might like to do a few more POSTs and another GET to observe what happens.

Add a “get customer by ID” endpoint

Let’s add the final endpoint that we wanted in our service. We want to be able to get a specific customer using the ID. Here’s the code to add to the controller:

// add these imports
import java.util.Optional;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// ...

    @GetMapping(path="/{id}", produces = {MediaType.APPLICATION_JSON_VALUE})
    @ResponseBody
    public ResponseEntity<Customer> getCustomer(@PathVariable Integer id) {
        Optional<Customer> c = service.getCustomer(id);
        if (c.isPresent()) {
            return ResponseEntity.status(HttpStatus.OK).body(c.get());
        } else {
            return ResponseEntity.status(HttpStatus.NOT_FOUND).body(null);
        }
    }

Here we see some differences from the previous endpoint implementation. This one is a little more sophisticated. First, we have added a path to the @GetMapping annotation to add a positional parameter to the end of the path, so this endpoint will be /api/v1/customers/{id}. In the method arguments we have a @PathVariable annotation to grab that {id} from the path and use it as an argument to our method.

Also, notice that the method returns ResponseEntity<Customer>. This gives us some more control over the response, and allows us to set different HTTP status codes (and if we wanted to we could also control the headers, body, etc.) based on our own business logic.

Inside this method we call our service’s (soon to be written) getCustomer(id) method which returns an Optional<Customer>. Then we check if the Optional actually contains a Customer, indicating that a customer entity/record was found for the specified id, and if so we return it along with an HTTP 200 (OK). If the Optional is empty, then return an HTTP 404 (not found).

Here’s the new method to add to the service:

// add this import
import java.util.Optional;

// ...

    public Optional<Customer> getCustomer(Integer id) {
        return repository.findById(id);
    }

This one is fairly simple – we are just calling a standard built-in method on the JPA repository to get the data.

Now we can restart the application, and test the new endpoint by asking for customers that we know exist, and do not exist to observe the different outcomes:

$ curl -i http://localhost:8080/api/v1/customers/1
HTTP/1.1 200
Content-Type: application/json
Transfer-Encoding: chunked
Date: Fri, 03 Feb 2023 18:15:30 GMT

{"id":1,"firstName":"Mark","lastName":"Nelson","email":"[email protected]"}

$ curl -i http://localhost:8080/api/v1/customers/5
HTTP/1.1 404
Content-Length: 0
Date: Fri, 03 Feb 2023 18:15:37 GMT

Notice the HTTP status codes are different in each case. Also, notice that the JSON returned when a customer is found is just one JSON object {…} not a list [{…}, … ,{…}] as in the get all customers endpoint.

Conclusion

Well there you have it – we have completed our simple customer microservice built using Spring Boot and Oracle Database. I hope you followed along and built it too, and enjoyed learning a bit about Spring Boot and Oracle! Stay tuned for more posts on this topic, each covering a slightly more advanced topic than the last. See you soon!

]]>
https://redstack.dev/2023/02/03/a-first-spring-boot-microservice-with-oracle/feed/ 0 3773
Two new Backend as a Service offerings live now! https://redstack.dev/2022/12/21/two-new-backend-as-a-service-offerings-live-now/ https://redstack.dev/2022/12/21/two-new-backend-as-a-service-offerings-live-now/#comments <![CDATA[Mark Nelson]]> Wed, 21 Dec 2022 15:56:09 +0000 <![CDATA[Uncategorized]]> <![CDATA[ebaas]]> <![CDATA[parse]]> <![CDATA[spingboot]]> <![CDATA[Spring]]> <![CDATA[spring cloud]]> https://redstack.dev/?p=3790 <![CDATA[Hi everyone! For the last few months I have been working on two projects which have just gone live with their first “Developer Preview” releases. If you’d like to check them out and see what I have been up to, … Continue reading ]]> <![CDATA[

Hi everyone!

For the last few months I have been working on two projects which have just gone live with their first “Developer Preview” releases.

If you’d like to check them out and see what I have been up to, have a look at:

It’s been a lot of fun working on these, and I am really happy to be able to tell you about them at last!

The Oracle Mobile Backend as a Service offering is built on top of Parse and lets you easily build mobile and web apps using any of the Parse SDKs and have all your data stored in an Oracle Autonomous Database in JSON collections. It also includes the Parse Dashboard for managing your application data. It’s easy to install from OCI Marketplace, and once the install is done, you can start hitting those APIs and building your apps right away!

The Oracle Backend as a Service for Spring Cloud lets you easily install a comprehensive runtime environment for Spring Boot applications including a Kubernetes (OKE) cluster, Spring Config Server (with the config data in Oracle Autonomous Database), Spring Eureka Service Registry, APISIX API Gateway and Dashboard, Netflix Conductor, Spring Admin Dashboard, Prometheus, Grafana, Jaeger and Open Telemetry. You can build apps using Spring Data with JPA or JDBC access to the Oracle Autonomous Database. We have included a sample custom Spring component for using Oracle Transactional Event Queuing. There’s a CLI to manage deploying apps into the environment, and managing configuration and database schemas for services. We also included a set of sample applications that demonstrate how to use the platform – these include service discovery, fault tolerance, distributed tracing and so on.

As “Developer Preview” implies – there’s much more to come in this space!

I am planning to write more blog posts really soon to demonstrate how to use both of these offerings. I hope you’ll check them out!

]]>
https://redstack.dev/2022/12/21/two-new-backend-as-a-service-offerings-live-now/feed/ 1 3790
Development environment setup for Spring Boot with Oracle https://redstack.dev/2022/12/08/development-environment-setup-for-spring-boot-with-oracle/ https://redstack.dev/2022/12/08/development-environment-setup-for-spring-boot-with-oracle/#comments <![CDATA[Mark Nelson]]> Thu, 08 Dec 2022 14:29:27 +0000 <![CDATA[Uncategorized]]> <![CDATA[Java]]> <![CDATA[Maven]]> <![CDATA[oracle]]> <![CDATA[Spring]]> <![CDATA[springboot]]> <![CDATA[vscode]]> https://redstack.dev/?p=3752 <![CDATA[Hi again! I am starting a series of posts about writing Spring Boot microservice applications with the Oracle Database, I plan to cover topics like databsae access, messaging, external configuration, service discovery, fault tolerance, workflow, observability and so on. But … Continue reading ]]> <![CDATA[

Hi again! I am starting a series of posts about writing Spring Boot microservice applications with the Oracle Database. I plan to cover topics like database access, messaging, external configuration, service discovery, fault tolerance, workflow, observability and so on. But before I get started, I wanted to document how I set up my development environment.

Personally, I work on Windows 11 with the Windows Subsystem for Linux and Ubuntu 20.04. Of course you can adjust these instructions to work on macOS or Linux.

Java

The first thing we need is the Java Development Kit. I used Java 17; here’s a permalink to download the latest tarball for x64: https://download.oracle.com/java/17/latest/jdk-17_linux-x64_bin.tar.gz

You can just decompress that in your home directory and then add it to your path:

export JAVA_HOME=$HOME/jdk-17.0.3
export PATH=$JAVA_HOME/bin:$PATH

You can verify it is installed with this command:

$ java -version
java version "17.0.3" 2022-04-19 LTS
Java(TM) SE Runtime Environment (build 17.0.3+8-LTS-111)
Java HotSpot(TM) 64-Bit Server VM (build 17.0.3+8-LTS-111, mixed mode, sharing)

Great! Now, let’s move on to build automation.

Maven

You can use Maven or Gradle to build Spring Boot projects, and when you generate a new project from Spring Initializr (more on that later) it will give you a choice of these two. Personally, I prefer Maven, so that’s what I document here. If you prefer Gradle, I’m pretty sure you’ll already know how to set it up 🙂

I use Maven 3.8.6, which you can download from the Apache Maven website in various formats. Here’s a direct link to the binary tar.gz: https://dlcdn.apache.org/maven/maven-3/3.8.6/binaries/apache-maven-3.8.6-bin.tar.gz

You can also just decompress this in your home directory and add it to your path:

export PATH=$HOME/apache-maven-3.8.6/bin:$PATH

You can verify it is installed with this command:

$ mvn -v
Apache Maven 3.8.6 (84538c9988a25aec085021c365c560670ad80f63)
Maven home: /home/mark/apache-maven-3.8.6
Java version: 17.0.3, vendor: Oracle Corporation, runtime: /home/mark/jdk-17.0.3
Default locale: en, platform encoding: UTF-8
OS name: "linux", version: "5.10.102.1-microsoft-standard-wsl2", arch: "amd64", family: "unix"

Ok, now we are going to need an IDE!

Visual Studio Code

These days I find I am using Visual Studio Code for most of my coding. It’s free, lightweight, has a lot of plugins, and is well supported. Of course, you can use a different IDE if you prefer.

Another great feature of Visual Studio Code that I really like is the support for “remote coding.” This lets you run Visual Studio Code itself on Windows while it connects to a remote Linux machine where the actual code is stored, built, run, and so on. This could be an SSH connection, or it could be a connection to a WSL2 “VM” on your machine. The latter option is what I do most often. So I get a nice friendly, well-behaved native desktop application, but I am coding on Linux. Kind of the best of both worlds!

You can download Visual Studio Code from its website and install it.

I use a few extensions (plugins) that you will probably want to get too – the main ones to look for are the Java and Spring Boot extension packs. These add support for the languages and frameworks and give you things like completion and syntax checking and so on.

You can install these by opening the extensions tab (Ctrl-Shift-X) and using the search bar at the top to find and install them.
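If you prefer the command line, VS Code can also install extensions with its code CLI. Here’s a quick sketch – the identifier shown is for Microsoft’s Extension Pack for Java; treat any other identifiers as things to search for in the Marketplace:

# install the Java extension pack from the command line
code --install-extension vscjava.vscode-java-pack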

Containers and Kubernetes

Since our microservices applications are almost certainly going to end up running in Kubernetes, it’s a good idea to have a local test environment. I like to use “docker compose” for initial testing locally and then move to Kubernetes later.

I use Rancher Desktop for both containers and Kubernetes on my laptop. There are other options if you prefer to use something different.

Oracle Database

And last, but not least, you will need the Oracle Database container image so we can run a local database to test against. If you don’t already have it, you will need to go to Oracle Container Registry first, navigate to “Database,” then “Enterprise,” and accept the license agreement. Then pull the image with these commands:

docker login container-registry.oracle.com -u [email protected]
docker pull container-registry.oracle.com/database/enterprise:21.3.0.0

Then you can start a database with this command:

docker run -d \
   --name oracle-db \
   -p 1521:1521 \
   -e ORACLE_PWD=Welcome123 \
   -e ORACLE_SID=ORCL \
   -e ORACLE_PDB=PDB1 \
   container-registry.oracle.com/database/enterprise:21.3.0.0

The first time you start it up, it will create a database instance for you. This takes a few minutes; you can watch the logs to see when it is done:

docker logs -f oracle-db

You will see this message in the logs when it is ready:

#########################
DATABASE IS READY TO USE!
#########################

You can then stop and start the database container as needed – you won’t need to wait for it to create the database instance each time, it will stop and start in just a second or two.

docker stop oracle-db
docker start oracle-db

You are going to want to grab its IP address for later on; you can do that with this command:

docker inspect oracle-db | grep IPAddress
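If you’d rather capture just the address in a shell variable (handy for scripting the later steps), a Go-template filter keeps the output clean:

# extract just the IP address of the database container
DB_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' oracle-db)
echo $DB_IP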

This container image has SQL*Plus in it, and you can use that as a database command line client, but I prefer the new Oracle SQLcl, which is a lot nicer – it has completion, arrow key navigation and lots of other cool new features. Here’s a permalink for the latest version: https://download.oracle.com/otn_software/java/sqldeveloper/sqlcl-latest.zip

You can just unzip this and add it to your path too, like Maven and Java.
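For example, assuming the archive extracts to a sqlcl directory (check after unzipping), the setup looks like this:

unzip sqlcl-latest.zip -d $HOME
export PATH=$HOME/sqlcl/bin:$PATH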

You can connect to the database using SQLcl like this (use the IP address you got above):

sql sys/Welcome123@//172.12.0.2:1521/pdb1 as sysdba

Well, that’s about everything we need! In the next post we’ll get started building a Spring Boot microservice!

]]>
https://redstack.dev/2022/12/08/development-environment-setup-for-spring-boot-with-oracle/feed/ 1 3752
Oracle REST Data Services 22.3 brings new REST APIs for Transactional Event Queueing https://redstack.dev/2022/10/31/oracle-rest-data-services-22-3-brings-new-rest-apis-for-transactional-event-queueing/ https://redstack.dev/2022/10/31/oracle-rest-data-services-22-3-brings-new-rest-apis-for-transactional-event-queueing/#comments <![CDATA[Mark Nelson]]> Mon, 31 Oct 2022 16:01:11 +0000 <![CDATA[Uncategorized]]> <![CDATA[oracle]]> <![CDATA[ORDS]]> <![CDATA[REST]]> <![CDATA[TEQ]]> <![CDATA[txeventq]]> https://redstack.dev/?p=3718 <![CDATA[Oracle REST Data Services 22.3 was released a couple of weeks ago, and it is now available on Oracle Autonomous Database as well! This release has a slew of new REST APIs for Oracle Transactional Event Queueing (or TxEventQ). If … Continue reading ]]> <![CDATA[

Oracle REST Data Services 22.3 was released a couple of weeks ago, and it is now available on Oracle Autonomous Database as well! This release has a slew of new REST APIs for Oracle Transactional Event Queueing (or TxEventQ). If you have not heard of it, TxEventQ is essentially a new, faster implementation of Oracle Advanced Queueing which has been in the database for around twenty years.

Many of these new REST APIs are very similar to the Kafka REST APIs, since TxEventQ provides Kafka compatibility as one of its features.

In this post, I want to show you how to use a few of the APIs, and then I’ll give you an idea of what kinds of APIs are available and where to find more information.

The first thing to do is to grab an Autonomous Database instance. It’s available in the Always Free tier, so you can try this for free! If you are not familiar with creating one, and accessing SQL and so on, check out this free LiveLab for details.

Make sure you get a 21c database – you may need to toggle the “Always Free” option to see 21c. The APIs described in this post are supported in Oracle Database 21c (and later).

When you get into your SQL worksheet, grab the URL from the browser, it will be something like this:

https://xyzabc-red1.adb.us-phoenix-1.oraclecloudapps.com/ords/admin/_sdw/?nav=worksheet

Now, chop off the end of the URL and replace it with the base URL for the TxEventQ REST APIs, and save that in an environment variable to save us some typing!

export ADDR="https://xyzabc-red1.adb.us-phoenix-1.oraclecloudapps.com/ords/admin/_/db-api/stable/database/teq"

And let’s create another environment variable with the authentication details. You can encode them using base64 like this, assuming your userid is admin and your password is your_password:

$ echo -n "admin:your_password" | base64
YWRtaW46eW91cl9wYXNzd29yZA==

Then we can use that value to create the authentication header details:

export AUTH="Authorization: Basic YWRtaW46eW91cl9wYXNzd29yZA=="

Great, that will save us from typing those each time!

Create a topic

Let’s start by creating a topic. We are going to need to know the database name for this – you can find that by running this query in your SQL worksheet:

select sys_context('userenv','db_name') from dual

You’ll need to put that database name into the URL below after “clusters” and before “topics”; in this example, my database name is “XYZABC_RED1“:

curl -X POST -H "$AUTH" -H "Content-Type: application/json" -d '{"topic_name": "mark1", "partitions_count": "6"}' "$ADDR/clusters/XYZABC_RED1/topics/"

In the body we specified the name of the topic (“mark1” in this case) and how many partitions we want the topic to have. When you run this request, you’ll see output something like this:

{"name":"MARK1","partitions":[{"partition":0,"leader":1,"replicas":1}]}

It created our topic for us!

List topics

Let’s list the topics now. Try this request:

curl -X GET -H "$AUTH" "$ADDR/topics/"

The output will be a JSON list of topic names, like this (you might want to create a few more topics to make it more interesting!):

["MARK1"]

Get a topic

We can also get the details of a single topic like this; the topic name is the last part of the URL:

curl -X GET -H "$AUTH" "$ADDR/topics/mark1/"

The output looks like this:

{"name":"MARK1","partitions":[{"partition":0,"leader":1,"replicas":1}]}

Create a consumer group

Now let’s create a consumer group. Here’s the request – notice that the topic name is in the body, and the name of the consumer group is the last part of the URL (“sub1” in this case):

curl -X POST -H "$AUTH" \
     -H "Content-Type: application/json" \
     -d '{"topic_name": "mark1"}' \
     "$ADDR/clusters/XYZABC_RED1/consumer-groups/sub1/"

The output from this is empty (unless you specify verbose, or get an error), but we can easily check the result in the database by running this query:

select * from user_queue_subscribers

Publish messages

Ok, I think we’re ready to publish a message! Here’s the request:

curl -X POST -H "$AUTH" \
     -H "Content-Type: application/json" \
     -d '{"records": [{"key":1,"value":"bob"}]}' \
     "$ADDR/topics/mark1/"

The output will look something like this:

{"Offsets":[{"partition":0,"offset":0}]}

You can put multiple records in the body to put more than one message on the topic, as shown in the sketch below.
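For example, this request (the same endpoint, just more entries in the records array) publishes two messages in one call:

curl -X POST -H "$AUTH" \
     -H "Content-Type: application/json" \
     -d '{"records": [{"key":2,"value":"alice"},{"key":3,"value":"carol"}]}' \
     "$ADDR/topics/mark1/"

The response should contain one partition/offset entry for each record.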

Consume messages

Now, let’s consume the messages off that topic with our consumer sub1. Here’s the request – notice that the topic name is in the body, and the consumer name is in the URL after “consumers”:

curl -X GET -H "$AUTH" \
     -H "Content-Type: application/json" \
     -d '{ "partitions": [ { "topic": "mark1", "partition": 0 } ] }' \
     "$ADDR/consumers/sub1/instances/1/records"

The output from this one (not surprisingly) looks like this:

[{"topic":"MARK1","key":"1","value":"bob","partition":0,"offset":0}]

Great, hopefully that gives you a feel for how these REST APIs for TxEventQ work!

But wait, there’s more!

Of course there are a lot more APIs available than the few I have shown you so far. They all follow a fairly similar pattern; let’s take a look at a list of what’s available:

Topics APIs

  • Create topic, optionally with partition count
  • List topics
  • Get a topic
  • Create a consumer group
  • Publish message(s)
  • List topics in a specific cluster
  • Get a topic in a specific cluster
  • Delete a topic (see the sketch after this list)
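As a quick sketch of one of these – deleting a topic should look something like the request below. The exact path here is my assumption, based on the create-topic endpoint we used earlier, so verify it against the documentation or the OpenAPI specification linked at the end of this post:

curl -X DELETE -H "$AUTH" "$ADDR/clusters/XYZABC_RED1/topics/mark1/"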

Partitions APIs

  • List partitions in a topic (see the sketch after this list)
  • Get details of one partition in a topic
  • Get partition message offsets
  • List partitions in a topic in a cluster
  • Get details of one partition in a topic in a cluster
  • List consumer lags for a partition
  • Publish message(s) in a particular partition
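Again, only as a sketch – since these APIs follow the Kafka REST conventions, listing the partitions of a topic plausibly looks like the request below, but do check the exact path in the OpenAPI specification:

curl -X GET -H "$AUTH" "$ADDR/topics/mark1/partitions/"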

Consumer Group APIs

  • List consumer groups
  • Details of one consumer group
  • Get consumer group lag summary
  • Get consumer group lags for all partitions
  • Get consumer group lags for a given partition
  • Delete consumer group
  • Create consumer group

Consumer APIs

  • Create consumer instance on consumer group
  • Delete consumer instance on consumer group
  • List topics that a consumer is subscribed to
  • Subscribe to topics
  • Unsubscribe from topics
  • Send message to given partition

Move Offset APIs

  • Fetch messages
  • Fetch messages from offset in specified partition
  • Move to beginning of partition
  • Move to end of partition
  • Get offsets in specific topics and partitions
  • Commit (set) offsets in specific partitions

Where to find more information

You can find more information about the TxEventQ REST APIs in the documentation, here: https://docs.oracle.com/en/database/oracle/oracle-rest-data-services/22.3/orrst/api-oracle-transactional-event-queues.html

Or if you prefer, you can open the OpenAPI specification on your database instance. The URL will be something like this, and you can search the output for “teq” to find the APIs:

https://xyzabc-red1.adb.us-phoenix-1.oraclecloudapps.com/ords/admin/_/db-api/stable/metadata-catalog/openapi.json

I hope you enjoyed this quick introduction to the new REST APIs for Transactional Event Queueing! Of course, this is available in any Oracle database, not just Autonomous Database. If you want to use Oracle REST Data Services with your own database, you might find this post about installing a standalone version interesting too!

]]>
https://redstack.dev/2022/10/31/oracle-rest-data-services-22-3-brings-new-rest-apis-for-transactional-event-queueing/feed/ 3 3718
Getting started with the new observability exporter for Oracle database https://redstack.dev/2022/09/27/getting-started-with-the-new-observability-exporter-for-oracle-database/ https://redstack.dev/2022/09/27/getting-started-with-the-new-observability-exporter-for-oracle-database/#respond <![CDATA[Mark Nelson]]> Tue, 27 Sep 2022 16:05:41 +0000 <![CDATA[Uncategorized]]> <![CDATA[database]]> <![CDATA[metrics]]> <![CDATA[observability]]> <![CDATA[oracle]]> <![CDATA[Prometheus]]> https://redstack.dev/?p=3681 <![CDATA[My colleague Paul Parkinson recently published our new unified obserability exporter for Oracle Database on GitHub, you can read about it here. I wanted to start playing around with it to see what we can do with it. In this … Continue reading ]]> <![CDATA[

My colleague Paul Parkinson recently published our new unified observability exporter for Oracle Database on GitHub – you can read about it here. I wanted to start playing around with it to see what we can do with it.

In this post I will start with a really simple example that just gets the exporter up and running and collects a few simple metrics from the database into Prometheus. In subsequent posts, I’ll go further and look at dashboards in Grafana, and also cover the logging and tracing capabilities! But you have to start somewhere, right?

First thing we need is a database of course! I just fired one up in a container like this:

docker run -d \
       --name oracle-db \
       -p 1521:1521 \
       -e ORACLE_PWD=Welcome123 \
       -e ORACLE_SID=ORCL \
       -e ORACLE_PDB=PDB1 \
       container-registry.oracle.com/database/enterprise:21.3.0.0

If you have not used this image before, you will first need to go to Oracle Container Registry at https://container-registry.oracle.com, log in, navigate to the Database category and then the “enterprise” image, and accept the license agreement. You will also need to log in with your Docker client so you can pull the image:

docker login container-registry.oracle.com
# this will prompt you for your username and password

The image will take a short time to pull the first time, and the first startup will actually create the database instance, and that takes a few minutes too. You can watch the logs to see when the database is ready:

docker logs -f oracle-db

You only need to put up with these delays the first time you start the image. After that you can stop and start the container as needed, and it will retain the data and start up very quickly.

# to stop the container:
docker stop oracle-db

# to start the container:
docker start oracle-db

Ok, so now we have a database available. Let’s connect to it and create some data to play with. You can use your favorite client – there’s SQL*Plus in that image if you don’t have anything else available. You can start it and connect to the database like this:

docker exec -ti oracle-db sqlplus pdbadmin/Welcome123@//localhost:1521/pdb1

Note: If you have not already, you might want to check out the new SQLcl command line tool, which offers command line completion and many other great features – check it out at https://www.oracle.com/database/sqldeveloper/technologies/sqlcl/

Let’s create a “customer” table and add a record:

create table customer (id number, name varchar2(256));
insert into customer (id, name) values (1, 'mark');
commit;

Great, and let’s just leave that session connected – that will come in handy later!

Now, let’s get the observability exporter and set it up.

First, you’ll need to clone the project from GitHub:

git clone https://github.com/oracle/oracle-db-appdev-monitoring
cd oracle-db-appdev-monitoring

You can build the project and create a container image (assuming you have Maven, Java and Docker installed) like this:

mvn clean package -DskipTests
docker build -t observability-exporter:0.1.0 .

If you don’t have those installed and you don’t want to – you can skip this step and just grab a pre-built container image from Oracle Container Registry:

docker pull container-registry.oracle.com/database/observability-exporter:0.1.0	

If you do it this way, make sure to use the full name later when we start the exporter, not the short version!

Now we need to create a configuration file and define our metrics. I called mine mark-metrics.toml and here’s the content:

[[metric]]
context = "customers"
request = "SELECT count(*) as num_custs FROM customer"
metricsdesc = { num_custs = "Number of customers." }

[[metric]]
context = "system"
request = "select count(*) as session_count from v$session where username is not null and type = 'USER' and con_id = sys_context('userenv','con_id')"
metricsdesc = { session_count = "Current session count." }

[[metric]]
context = "system"
request = "select count(*) as active_sessions from v$session where username is not null and type = 'USER' and status = 'ACTIVE' and con_id = sys_context('userenv','con_id')"
metricsdesc = { active_sessions = "Active sessions." }

[[metric]]
context = "system"
request = "select (c.session_count - a.active_sessions) as inactive_sessions from (select count(*) as session_count from v$session where username is not null and type = 'USER' and con_id = sys_context('userenv','con_id')) c, (select count(*) as active_sessions from v$session where username is not null and type = 'USER' and status = 'ACTIVE' and con_id = sys_context('userenv','con_id')) a"
metricsdesc = { inactive_sessions = "Inactive sessions." }

[[metric]]
context = "system"
request = "select b.session_count as blocked_sessions from (select count(*) as session_count from v$session where username is not null and type = 'USER' and blocking_session_status = 'VALID' and con_id = sys_context('userenv','con_id')) b"
metricsdesc = { blocked_sessions = "Blocked sessions." }

I defined five metrics in this file. Each metric starts with the [[metric]] heading and can have several fields. You can see more information in the documentation here. In the spirit of keeping this first post simple, I just created basic metrics with no labels or anything fancy 🙂

Let’s take a close look at the first metric, here it is again:

[[metric]]
context = "customers"
request = "SELECT count(*) as num_custs FROM customer"
metricsdesc = { num_custs = "Number of customers." }

It is in the context (or group) called customers. The metric itself is called num_custs. You can see how we use the metricsdesc to create a human-readable documentation/description for the metric. And the metric itself is defined with an SQL statement. Wow! That’s pretty cool, right? That means that anything I can write an SQL statement to get from the database can be exported as a metric! In this one I just count the number of entries in that customer table we just created.

The other four metrics are some simple queries that get the number of sessions in the database as well as how many are active, inactive and blocked. These are all in the system context. You can define whatever contexts you like.

When you later look at a metric in Prometheus its name will be something like this:

oracledb_customers_num_custs

Notice how the context (customers) and the metric name (num_custs) are in there.

Ok, now that we have defined our metrics, we can start up the exporter. Let’s run it in another container, alongside the database. We can start it like this:

docker run -d \
       -v /home/mark/oracle-db-appdev-monitoring/mark-metrics.toml:/metrics.toml \
       -p 9161:9161 \
       -e DEFAULT_METRICS=/metrics.toml \
       -e DATA_SOURCE_NAME=pdbadmin/Welcome123@172.17.0.2:1521/pdb1 \
       --name exporter \
       observability-exporter:0.1.0

There are a couple of things to note here. First, I am providing the configuration file we just created using the -v mount. This will give the exporter access to the metrics definitions. Second, we need to tell it how to connect to the database. You’ll need to get the IP address of the database container using this command:

docker inspect oracle-db | grep IPAddress

Yours will probably be different to mine, so you’ll need to update the value of DATA_SOURCE_NAME to match your environment. And finally, a reminder – if you pulled the pre-built image down from Oracle Container Registry, you’ll need to use the fully qualified name on the last line.

Once this container starts up, grab its IP address too – we’ll need that in a minute:

docker inspect exporter | grep IPAddress

The exporter should start right up, and assuming we got the address right and no typos, it should be working and we can get metrics like this:

$ curl localhost:9161/metrics
# HELP oracledb_system_inactive_sessions Inactive sessions.
# TYPE oracledb_system_inactive_sessions gauge
oracledb_system_inactive_sessions 1.0
# HELP oracledb_up Whether the Oracle database server is up.
# TYPE oracledb_up gauge
oracledb_up 1.0
# HELP oracledb_system_blocked_sessions Blocked sessions.
# TYPE oracledb_system_blocked_sessions gauge
oracledb_system_blocked_sessions 0.0
# HELP oracledb_customers_num_custs Number of customers.
# TYPE oracledb_customers_num_custs gauge
oracledb_customers_num_custs 2.0
# HELP oracledb_system_active_sessions Active sessions.
# TYPE oracledb_system_active_sessions gauge
oracledb_system_active_sessions 1.0
# HELP oracledb_system_session_count Current session count.
# TYPE oracledb_system_session_count gauge
oracledb_system_session_count 2.0

If you don’t see this, check the container logs to see what the error was:

docker logs exporter

Assuming everything is working now, let’s start up Prometheus and configure it to scrape these metrics.

First, let’s create a configuration file called prometheus.yml with this content:

global:
  scrape_interval:     10s
  evaluation_interval: 10s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
    - targets: ['127.0.0.1:9090']

  - job_name: 'oracle-exporter'
    metrics_path: '/metrics'
    scrape_interval: 10s
    scrape_timeout: 8s
    static_configs:
    - targets: ['172.17.0.4:9161']

The only thing you’ll need to change here is the very last line – you need to put the IP address of your exporter container in there.

Then you can start up Prometheus using this configuration like this:

docker run -d \
       --name prometheus \
       -p 9090:9090 \
       -v /home/mark/prometheus.yml:/etc/prometheus/prometheus.yml \
       prom/prometheus --config.file=/etc/prometheus/prometheus.yml

It should start right up and you can access it at http://localhost:9090

The user interface looks like this, and you can type into that search field to find a metric. If you start typing “num_custs” it should find our metric. Then hit enter, or click on the Execute button to see the value of the metric. It might take up to 10 seconds for data to be available, since we configured the scrape interval as 10 seconds in our configuration file. You should see something like this – yours will probably say 1, not 2:

If you go insert some more records into that table and then check again, you’ll see the value is updated. You can also click on the Graph tab to view that as a time series graph. Try adding and removing records to see what happens. Remember to wait a little while between each update so that new metrics are collected.
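If you’d like to drive that from the shell instead, here’s a small sketch – it assumes the container name, credentials and table from earlier in this post – that inserts a row and then reads the metric back through the Prometheus query API:

# add another customer row
docker exec -i oracle-db sqlplus -s pdbadmin/Welcome123@//localhost:1521/pdb1 <<EOF
insert into customer (id, name) values (3, 'alice');
commit;
exit
EOF

# wait for the next scrape (we configured 10 seconds), then query Prometheus
sleep 10
curl -s 'http://localhost:9090/api/v1/query?query=oracledb_customers_num_custs'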

You can also try the other metrics we created! So there we go, that’s covered the very basic starting steps of defining some metrics, running the exporter and scraping the metrics into Prometheus! Stay tuned for some follow up posts where I will build dashboards in Grafana, and also look at exporting logs and distributed tracing!

Bonus info: If you use WSL2 like I do, you might see a warning on the Prometheus web user interface about clock skew. If you do, you can fix that by updating the time in WSL like this:

sudo hwclock -s
]]>
https://redstack.dev/2022/09/27/getting-started-with-the-new-observability-exporter-for-oracle-database/feed/ 0 3681
New web page for Oracle Transactional Event Queueing https://redstack.dev/2022/08/05/new-web-page-for-oracle-transactional-event-queueing/ https://redstack.dev/2022/08/05/new-web-page-for-oracle-transactional-event-queueing/#respond <![CDATA[Mark Nelson]]> Fri, 05 Aug 2022 17:25:29 +0000 <![CDATA[Uncategorized]]> <![CDATA[TEQ]]> https://redstack.dev/?p=3678 <![CDATA[The new web page for Oracle Transactional Event Queueing is live and has lots of great information including sample code, links to hands-on labs, documentation and some user stories! Hope you can check it out!]]> <![CDATA[

The new web page for Oracle Transactional Event Queueing is live and has lots of great information including sample code, links to hands-on labs, documentation and some user stories! Hope you can check it out!

]]>
https://redstack.dev/2022/08/05/new-web-page-for-oracle-transactional-event-queueing/feed/ 0 3678
Creating a stored procedure (dare I call it a microservice?) to automatically process events on a queue https://redstack.dev/2022/06/23/creating-a-stored-procedure-dare-i-call-it-a-microservice-to-automatically-process-events-on-a-queue/ https://redstack.dev/2022/06/23/creating-a-stored-procedure-dare-i-call-it-a-microservice-to-automatically-process-events-on-a-queue/#respond <![CDATA[Mark Nelson]]> Thu, 23 Jun 2022 17:14:43 +0000 <![CDATA[Uncategorized]]> <![CDATA[microservice]]> <![CDATA[Notification]]> <![CDATA[Subscriber]]> <![CDATA[TEQ]]> https://redstack.dev/?p=3650 <![CDATA[In this post I want to look at how to create a stored procedure in the database to automatically process events as they are produced on a Transactional Event Queue (TEQ). Having a small, discrete piece of code that processes … Continue reading ]]> <![CDATA[

In this post I want to look at how to create a stored procedure in the database to automatically process events as they are produced on a Transactional Event Queue (TEQ).

Having a small, discrete piece of code that processes events off a queue is a pretty common use case. You could even call it a microservice I guess 🙂 since it does meet the well-established criteria of having its own code base, being loosely coupled, independently deployable and testable. One thing I find really interesting about writing a “microservice” like this and deploying it in the Oracle Database is that I can essentially scale it to zero instances, and it will only run (and consume resources) when there is actually work for it to do. I could also use the Database Resource Manager to control how many resources it is able to consume if I wanted to 🙂 And it would not be all that hard to instrument it so I could get logs, metrics and even distributed tracing – but that’s another story!

So, let’s go ahead and build this thing!

We’ll start with a new Oracle Database. I am going to run it in a Docker container using the standard image provided on Oracle Container Registry. If you have not used it before, you will need to log in to Oracle Container Registry at https://container-registry.oracle.com, then navigate to the “Database” section, then “enterprise”, and read and accept the license.

Make sure you are logged into Oracle Container Registry in your Docker client too:

docker login container-registry.oracle.com -u [email protected]

Once you have authenticated, you can start up an Oracle 21c Database using this command:

docker run -d \
  --name oracle-db \
  -p 1521:1521 \
  -e ORACLE_PWD=Welcome123## \
  -e ORACLE_SID=ORCL \
  -e ORACLE_PDB=PDB1 \
  container-registry.oracle.com/database/enterprise:21.3.0.0

It will take a few minutes (the first time only) for the database files to be created and the instance to start up. You can watch the logs, or use this little shell command to do the waiting for you:

while ! docker logs oracle-db | grep -q "DATABASE IS READY TO USE!";
do 
  sleep 10
done

Great – now that we have a database running, let’s set up the necessary permissions for our user. I am going to use the pdbadmin user in the PDB1 pluggable database. So let’s give that user permissions to use the TEQ packages (I am using the new SQLcl command line tool, but you can use SQL*Plus or SQL Developer, or whatever tool you prefer):

# sql sys/Welcome123##@//localhost:1521/pdb1 as sysdba

SQL> alter session set container = pdb1;

SQL> grant dba to pdbadmin;

SQL> grant execute on dbms_aqadm to pdbadmin;
SQL> grant execute on dbms_aq to pdbadmin;

SQL> commit;
SQL> exit

Ok, now we can connect with our pdbadmin user and start setting up our environment:

# sql pdbadmin/Welcome123##@//localhost:1521/pdb1

First we want to create our (TEQ) queue (or topic) and start it. We’ll call the queue my_teq:

begin
    dbms_aqadm.create_transactional_event_queue(
        queue_name         => 'my_teq',
        multiple_consumers => true
    );
    
    dbms_aqadm.start_queue(
        queue_name         => 'my_teq'
    ); 
end;
/

Since we are using a multi-consumer queue (i.e. a topic) we need to add a subscriber too. Let’s call it my_subscriber:

declare
    subscriber sys.aq$_agent;
begin
    dbms_aqadm.add_subscriber(
        queue_name => 'my_teq',
        subscriber => sys.aq$_agent('my_subscriber', null, 0)
    );
end;
/

We’ll keep our microservice super-simple for this demonstration, we’ll just have it record the messages it receives in an “output” table – so let’s create that table now:

create table my_log (
    message varchar(256),
    when timestamp(6)
);

Ok, so here is our consumer microservice:

create or replace procedure receiver (
    context in raw,
    reginfo in sys.aq$_reg_info,
    descr in sys.aq$_descriptor,
    payload in varchar2,
    payloadl in number
) as
  dequeue_options dbms_aq.dequeue_options_t;
  message_properties dbms_aq.message_properties_t;
  message_handle raw ( 16 ) ;
  message sys.aq$_jms_text_message;
  no_messages exception;
  pragma exception_init ( no_messages, -25228 ) ;
begin
  dequeue_options.msgid := descr.msg_id;
  dequeue_options.consumer_name := descr.consumer_name;
  dequeue_options.navigation := dbms_aq.first_message;
  loop
    dbms_aq.dequeue (
      queue_name => 'my_teq',
      dequeue_options => dequeue_options,
      message_properties => message_properties,
      payload => message,
      msgid => message_handle
    );
    insert into my_log values ( message.text_vc, sysdate );
    commit;
  end loop;
exception
when no_messages then
  dbms_output.put_line ( 'No more messages for processing' ) ;
  commit;
end;
/

Let’s walk through that and talk about the details. First, the procedure must have this signature/interface:

procedure receiver (
    context in raw,
    reginfo in sys.aq$_reg_info,
    descr in sys.aq$_descriptor,
    payload in varchar2,
    payloadl in number
)

The name of the procedure is up to you, but it must have those exact parameters in that order, since this is a callback, and the TEQ notification is expecting this signature so that it can pass the data about new messages to the consumer.

When we get the callback, we need to perform a dequeue operation to get the actual message/event off the queue/topic. Since it is possible that there is more than one, it’s a good idea to use a loop to read and process all of them before we complete. Here we have a simple loop to dequeue a message and then save the details in our log table:

  loop
    dbms_aq.dequeue (
      queue_name => 'my_teq',
      dequeue_options => dequeue_options,
      message_properties => message_properties,
      payload => message,
      msgid => message_handle
    );
    insert into my_log values ( message.text_vc, sysdate );
    commit;
  end loop;

We’ve also defined an exception handler for when there are no messages available (though this should rarely happen, it’s still good practice to cater for it):

when no_messages then
  dbms_output.put_line ( 'No more messages for processing' ) ;
  commit;

I used the JMS message format in this example, but of course you could use RAW or JSON or a user-defined type instead.

Ok, so now that our microservice is ready, we need to tell the database to call it when there is a message to process. To do this, we create a notification as follows:

begin
  dbms_aq.register(
      sys.aq$_reg_info_list(
        sys.aq$_reg_info(
            'my_teq:my_subscriber',
            dbms_aq.namespace_aq,
            'plsql://receiver', 
            HEXTORAW('FF')
        )
      ), 1);
  commit;
end;
/

Ok, so let’s talk about what is happening here. This register function that we are running will set up the connection between the queue, the subscriber and the consumer. In the aq$_reg_info you can see the first parameter has the queue name followed by a colon and the subscriber name – so this is telling us “when we have a message on my_teq and it is addressed to the subscriber my_subscriber…”

The next parameter tells us that we are interested in AQ (and TEQ) notifications, and the third parameter tells us the callback address. In this case we are telling it to run the PL/SQL procedure called receiver.

Once that is done, you can check on the details with this query:

select r.reg_id, subscription_name, location_name, num_ntfns, num_pending_ntfns
from USER_SUBSCR_REGISTRATIONS r, V$SUBSCR_REGISTRATION_STATS s
where r.reg_id = s.reg_id;

REG_ID                      SUBSCRIPTION_NAME    LOCATION_NAME NUM_NTFNS NUM_PENDING_NTFNS
______ ______________________________________ ________________ _________ _________________
   301 "PDBADMIN"."MY_TEQ":"MY_SUBSCRIBER"    plsql://receiver        40                 0

If you come back and run this again later, you will see the number of notifications sent, and pending (i.e. the last two columns) will increase each time we send a message.

Ok, let’s enqueue a message (publish an event) to test this out!

We can use this command to send a test message. This creates and sends a JMS message on our my_teq queue/topic addressed to our my_subscriber consumer:

declare
  enqueue_options dbms_aq.enqueue_options_t;
  message_properties dbms_aq.message_properties_t;
  message_handle raw(16);
  message sys.aq$_jms_text_message;
begin
  message := sys.aq$_jms_text_message.construct;
  message.set_text('hello from mark');
  message_properties.recipient_list(0) := sys.aq$_agent('my_subscriber', null, null);
  dbms_aq.enqueue(
    queue_name => 'my_teq',
    enqueue_options => enqueue_options,
    message_properties => message_properties,
    payload => message,
    msgid => message_handle);
  commit;
end;
/

Once that is run, the notification will kick in, the callback will occur, and our microservice will run, consuming all of the messages and dumping them out into our “output table.” We can check the results with this query:

SQL> select * from my_log;

           MESSAGE                               WHEN
__________________ __________________________________
hello from mark    23-JUN-22 04.44.06.000000000 PM

Feel free to go run that a few more times to see what happens.

So there we go! We created a nice simple, loosely coupled consumer that will process messages/events as they arrive, and will scale to zero (consume no resources) when there is no work to do. Enjoy!

]]>
https://redstack.dev/2022/06/23/creating-a-stored-procedure-dare-i-call-it-a-microservice-to-automatically-process-events-on-a-queue/feed/ 0 3650
Java to get go-routine-like virtual threads! https://redstack.dev/2022/06/23/java-to-get-go-routine-like-virtual-threads/ https://redstack.dev/2022/06/23/java-to-get-go-routine-like-virtual-threads/#respond <![CDATA[Mark Nelson]]> Thu, 23 Jun 2022 14:44:50 +0000 <![CDATA[Uncategorized]]> <![CDATA[Java]]> <![CDATA[JVM]]> <![CDATA[thread]]> https://redstack.dev/?p=3646 <![CDATA[Yay! Java is finally going to get some lightweight threads, a bit like go-routines, which allow you to create threads in the JVM without each one consuming an OS thread! I’m looking forward to trying it out in Java 19. … Continue reading ]]> <![CDATA[

Yay! Java is finally going to get some lightweight threads, a bit like go-routines, which allow you to create threads in the JVM without each one consuming an OS thread!

I’m looking forward to trying it out in Java 19.

Read all the details in this article by Nicolai Parlog: https://blogs.oracle.com/javamagazine/post/java-loom-virtual-threads-platform-threads

]]>
https://redstack.dev/2022/06/23/java-to-get-go-routine-like-virtual-threads/feed/ 0 3646
Some big updates for the Python Oracle library (cx_Oracle) https://redstack.dev/2022/05/26/some-big-updates-for-the-python-oracle-library-cx_oracle/ https://redstack.dev/2022/05/26/some-big-updates-for-the-python-oracle-library-cx_oracle/#respond <![CDATA[Mark Nelson]]> Thu, 26 May 2022 13:46:58 +0000 <![CDATA[Uncategorized]]> <![CDATA[oracle]]> <![CDATA[python]]> https://redstack.dev/?p=3642 <![CDATA[There are some really interesting updates for the open source Python Oracle library (known as cx_oracle, but changing its name is part of this) – check it out here: https://cjones-oracle.medium.com/open-source-python-thin-driver-for-oracle-database-e82aac7ecf5a]]> <![CDATA[

There are some really interesting updates for the open source Python Oracle library (known as cx_Oracle, but changing its name is part of this) – check it out here:

https://cjones-oracle.medium.com/open-source-python-thin-driver-for-oracle-database-e82aac7ecf5a

]]>
https://redstack.dev/2022/05/26/some-big-updates-for-the-python-oracle-library-cx_oracle/feed/ 0 3642
Cross-region event propagation with Oracle Transactional Event Queues https://redstack.dev/2022/05/24/cross-region-event-propagation-with-oracle-transactional-event-queues/ https://redstack.dev/2022/05/24/cross-region-event-propagation-with-oracle-transactional-event-queues/#comments <![CDATA[Mark Nelson]]> Tue, 24 May 2022 16:49:56 +0000 <![CDATA[Uncategorized]]> <![CDATA[JMS]]> <![CDATA[OCI]]> <![CDATA[propagation]]> <![CDATA[TEQ]]> https://redstack.dev/?p=3527 <![CDATA[In this post I want to demonstrate how to use Oracle Transactional Event Queues (TEQ) to propagate messages/events across regions. I will use two Oracle Autonomous Databases running in Oracle Cloud, one in Ashburn, VA and one in Phoenix, AZ … Continue reading ]]> <![CDATA[

In this post I want to demonstrate how to use Oracle Transactional Event Queues (TEQ) to propagate messages/events across regions. I will use two Oracle Autonomous Databases running in Oracle Cloud, one in Ashburn, VA and one in Phoenix, AZ (about 2,000 miles apart).

Of course, there are a lot of reasons why you might want to propagate events like this, and you don’t necessarily have to do it across geographic regions – you might just want to do it across two database instances in the same data center, or even just two topics in the same database!

Here’s a quick diagram of what we are going to build. We are going to use the JMS Pub/Sub model. Our producer will connect to the ASHPROD1 instance and put messages onto the topic ASH_TOPIC. Messages will be propagated from this topic to the topic PHX_TOPIC in the PHXPROD1 instance. Our consumer will connect to PHXPROD1 and consume messages from there.

To get started, let’s create two databases. To follow along, you’ll need an Oracle Cloud account – you can do this with the “Always Free” account using the 21c version of the Autonomous Database, so you can try this without spending any money 🙂 You can also use 19c if you prefer.

Creating the databases

First we log into the Oracle Cloud Infrastructure (OCI) Console at https://cloud.oracle.com. Enter your cloud account name and hit the “Next” button.

After you log in, click on the “hamburger” (three lines) menu (1) and go to “Oracle Database” (2) and then “Autonomous Database” (3) as shown:

Choose your compartment (1), and the region (2) (I’ll use Ashburn and Phoenix – use any two regions you like, or two in the same region will work too), then click on the “Create Autonomous Database” (3) button:

In the dialog, we need to give the database a name, I used ASHPROD1. Choose “Transaction Processing” as the workload type and “Shared Infrastructure” as the deployment type:

You can accept the default 19c database (or toggle that “Always Free” switch to use 21c). The default 1 OCPU, 1 TB is fine for this exercise. Also provide a password for the administrator (don’t forget it!):

In the “Choose network access” section, choose the option for secure access and click on the “Add My IP Address” button. Choose the “Bring Your Own License (BYOL)” option and provide an email address for the administrator:

Then click on the “Create Autonomous Database” button to create the database.

Now choose the second region, e.g. Phoenix, in the top right corner of the OCI Console and repeat this same process to create a second database, for example called PHXPROD1. This time though, choose the “secure access from anywhere” option, since ASHPROD1 is going to need to connect to this instance.

Obtain Database Wallets

So now we have our two databases. Let’s download the wallets so that we can connect to them. The database wallets contain the necessary information to connect to, and authenticate with, the database.

In the OCI Console, click on the database name to see the details of the database:

Next, click on the “DB Connection” button:

You will click on the “Download wallet” button (1) to get the wallet file, but while you are here, notice the connection strings (2) – we’ll use one of those later.

After you click on the button, provide a password for the wallet, and then click on the “Download” button:

Repeat this for the other database.

Creating our consumer

Let’s create a new project and write our consumer code. We’ll use Maven to simplify the dependency management and to make it easy to run our consumer. Let’s create a new directory and unzip our two wallets into this directory. So we should see something like this:


/home/mark/src/redstack
├── ASHPROD1
│   ├── README
│   ├── cwallet.sso
│   ├── ewallet.p12
│   ├── ewallet.pem
│   ├── keystore.jks
│   ├── ojdbc.properties
│   ├── sqlnet.ora
│   ├── tnsnames.ora
│   └── truststore.jks
└── PHXPROD1
    ├── README
    ├── cwallet.sso
    ├── ewallet.p12
    ├── ewallet.pem
    ├── keystore.jks
    ├── ojdbc.properties
    ├── sqlnet.ora
    ├── tnsnames.ora
    └── truststore.jks

You will need to edit both of those sqlnet.ora files and update the DIRECTORY so that it is correct; for example, we would change this:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="?/network/admin")))
SSL_SERVER_DN_MATCH=yes

To this:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA =
  (DIRECTORY="/home/mark/src/redstack/PHXPROD1")))
SSL_SERVER_DN_MATCH=yes
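If you’d rather script that edit than make it by hand, a sed one-liner per wallet does the job – this is just a sketch, so adjust the paths to match where you unzipped your wallets:

sed -i 's|?/network/admin|/home/mark/src/redstack/ASHPROD1|' ASHPROD1/sqlnet.ora
sed -i 's|?/network/admin|/home/mark/src/redstack/PHXPROD1|' PHXPROD1/sqlnet.ora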

Let’s add a Maven POM file to set up our project. I am assuming you have Maven and a JDK installed. If not – go get those now 🙂 I am using Maven 3.8.4 and Java 17.0.3. Create a file called pom.xml with this content:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.wordpress.redstack</groupId>
    <artifactId>propagation</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>propagation</name>
    <description>Demo of TEQ propagation</description>

    <properties>
        <maven.compiler.target>17</maven.compiler.target>
        <maven.compiler.source>17</maven.compiler.source>
    </properties>

    <dependencies>
        <dependency>
            <groupId>javax.transaction</groupId>
            <artifactId>javax.transaction-api</artifactId>
            <version>1.2</version>
        </dependency>
        <dependency>
            <groupId>com.oracle.database.jdbc</groupId>
            <artifactId>ojdbc8</artifactId>
            <version>19.3.0.0</version>
        </dependency>
        <dependency>
            <groupId>com.oracle.database.messaging</groupId>
            <artifactId>aqapi</artifactId>
            <version>19.3.0.0</version>
        </dependency>
        <dependency>
            <groupId>com.oracle.database.security</groupId>
            <artifactId>oraclepki</artifactId>
            <version>19.3.0.0</version>
        </dependency>
        <dependency>
            <groupId>com.oracle.database.security</groupId>
            <artifactId>osdt_core</artifactId>
            <version>19.3.0.0</version>
        </dependency>
        <dependency>
            <groupId>com.oracle.database.security</groupId>
            <artifactId>osdt_cert</artifactId>
            <version>19.3.0.0</version>
        </dependency>
        <dependency>
            <groupId>javax.jms</groupId>
            <artifactId>javax.jms-api</artifactId>
            <version>2.0.1</version>
        </dependency>
        <dependency>
            <groupId>javax.transaction</groupId>
            <artifactId>jta</artifactId>
            <version>1.1</version>
        </dependency>
    </dependencies>

    <profiles>
        <profile>
            <id>consumer</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>exec-maven-plugin</artifactId>
                        <version>3.0.0</version>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>exec</goal>
                                </goals>
                            </execution>
                        </executions>
                        <configuration>
                            <executable>java</executable>
                            <arguments>
                                <argument>-Doracle.jdbc.fanEnabled=false</argument>
                                <argument>-classpath</argument>
                                <classpath/>
                                <argument>com.wordpress.redstack.Consumer</argument>
                            </arguments>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </profile>
        <profile>
            <id>producer</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>exec-maven-plugin</artifactId>
                        <version>3.0.0</version>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>exec</goal>
                                </goals>
                            </execution>
                        </executions>
                        <configuration>
                            <executable>java</executable>
                            <arguments>
                                <argument>-Doracle.jdbc.fanEnabled=false</argument>
                                <argument>-classpath</argument>
                                <classpath/>
                                <argument>com.wordpress.redstack.Producer</argument>
                            </arguments>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </profile>
    </profiles>
</project>

This defines the Maven coordinates for our project, the dependencies we need to compile and run our code, and also a convenience goal to run the consumer (or producer) directly from Maven so that we don’t have to worry about constructing the class path manually. Let’s also create some directories to store our code:

mkdir -p src/main/java/com/wordpress/redstack
mkdir -p src/main/resources

Now let’s create a Java class in src/main/java/com/wordpress/redstack/Consumer.java with the following content:

package com.wordpress.redstack;

import java.sql.SQLException;

import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicSession;

import oracle.AQ.AQException;
import oracle.jms.AQjmsFactory;
import oracle.jms.AQjmsSession;
import oracle.jms.AQjmsTextMessage;
import oracle.jms.AQjmsTopicSubscriber;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class Consumer {

    private static String username = "admin";
    private static String url = "jdbc:oracle:thin:@phxprod1_high?TNS_ADMIN=/home/mark/src/redstack/PHXPROD1";
    private static String topicName = "phx_topic";

    public static void main(String[] args) throws AQException, SQLException, JMSException {

        // create a topic session
        PoolDataSource ds = PoolDataSourceFactory.getPoolDataSource();
        ds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        ds.setURL(url);
        ds.setUser(username);
        ds.setPassword(System.getenv("DB_PASSWORD"));

        // create a JMS topic connection and session
        TopicConnectionFactory tcf = AQjmsFactory.getTopicConnectionFactory(ds);
        TopicConnection conn = tcf.createTopicConnection();
        conn.start();
        TopicSession session = 
           (AQjmsSession) conn.createSession(true, Session.AUTO_ACKNOWLEDGE);

        // create a subscriber on the topic
        Topic topic = ((AQjmsSession) session).getTopic(username, topicName);
        AQjmsTopicSubscriber subscriber = 
           (AQjmsTopicSubscriber) session.createDurableSubscriber(topic, "BOOK");

        System.out.println("Waiting for messages...");

        // wait forever for messages to arrive and print them out
        while (true) {

            // the 1_000 is a one second timeout
            AQjmsTextMessage message = (AQjmsTextMessage) subscriber.receive(1_000); 
            if (message != null) {
                if (message.getText() != null) {
                    System.out.println(message.getText());
                } else {
                    System.out.println();
                }
            }
            session.commit();
        }
    }

}

Let’s take a look at the interesting parts of that code.

    private static String url = "jdbc:oracle:thin:@phxprod1_high?TNS_ADMIN=/home/mark/src/redstack/PHXPROD1";

This defines the URL that we will use to connect to the database. Notice that it is using an alias (phxprod1_high) – that might look familiar, remember we saw those on the OCI Console when we were downloading the wallet. If you take a look at the tnsnames.ora file in the PHXPROD1 wallet you will see how this is defined, something like this:

phxprod1_high = (description= (retry_count=20)(retry_delay=3)(address=(protocol=tcps)(port=1522)(host=adb.us-phoenix-1.oraclecloud.com))(connect_data=(service_name=xxx_phxprod1_high.adb.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com, OU=Oracle BMCS US, O=Oracle Corporation, L=Redwood City, ST=California, C=US")))

In our main() method, we start by connecting to the database instance:

        // create a topic session
        PoolDataSource ds = PoolDataSourceFactory.getPoolDataSource();
        ds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        ds.setURL(url);
        ds.setUser(username);
        ds.setPassword(System.getenv("DB_PASSWORD"));

Notice that we are reading the password from an environment variable – so you’ll need to set that variable wherever you are going to run this (note – this is not my real password, just an example):

export DB_PASSWORD=Welcome123##

Next we set up a TopicConnection, start a JMS Session, look up our Topic and create a Subscriber. This is all fairly standard JMS stuff 🙂

        // create a JMS topic connection and session
        TopicConnectionFactory tcf = AQjmsFactory.getTopicConnectionFactory(ds);
        TopicConnection conn = tcf.createTopicConnection();
        conn.start();
        TopicSession session = (AQjmsSession) 
           conn.createSession(true, Session.AUTO_ACKNOWLEDGE);

        // create a subscriber on the topic
        Topic topic = ((AQjmsSession) session).getTopic(username, topicName);
        AQjmsTopicSubscriber subscriber =
           (AQjmsTopicSubscriber) session.createDurableSubscriber(topic, "BOOK");

        System.out.println("Waiting for messages...");

I created a Durable Subscriber and named it BOOK. We’ll see that name again later, remember that!

Finally, we are going to just wait for messages forever and print them out.

        // wait forever for messages to arrive and print them out
        while (true) {
            AQjmsTextMessage message = (AQjmsTextMessage) subscriber.receive(1_000);
            if (message != null) {
                if (message.getText() != null) {
                    System.out.println(message.getText());
                } else {
                    System.out.println();
                }
            }
            session.commit();
        }

Normally, we would not wait forever, and we’d clean up our resources, but since this is just a small example consumer, we’ll make some allowances 🙂

Ok, that takes care of our consumer. We won’t run it yet, since we have not created the topics. Let’s do that now!
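For later reference – once the topics exist, you should be able to run the consumer through the Maven profile we defined in the POM. Remember that the code reads the database password from an environment variable:

export DB_PASSWORD='your_admin_password'
mvn compile
mvn -P consumer exec:exec

The producer profile (which we’ll use later) works the same way with -P producer.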

Create the topics

We are going to create two topics, one in each database instance/region, and configure propagation between them. Let’s review what we want:

Ashburn (producer side)        Phoenix (consumer side)
ASHPROD1 database instance     PHXPROD1 database instance
ASH_TOPIC topic                PHX_TOPIC topic

Navigate back to your ASHPROD1 Autonomous Database in the OCI Console and click on the “Database Actions” button:

Note that your browser might think this is a pop-up and block it. If so, clicking on the button again usually lets the browser know you really meant to open it 🙂

In the Database Actions page, click on the “SQL” card to open the SQL Worksheet:

If you get the tour, you can click on “Next” or the “X” to close it.

We are just going to create our topics in the ADMIN schema. In real life, you would probably create a new user/schema to keep your topics in, perhaps several so that you can group them for easier administration. You can create topics with Java or PL/SQL. For this example, we will use PL/SQL.

Here’s the commands to create and start our new topic, ASH_TOPIC:

begin
    dbms_aqadm.create_sharded_queue(
        queue_name => 'ash_topic',
        multiple_consumers => TRUE
    ); 
    dbms_aqadm.start_queue('ash_topic');
end;

If you are using 21c, instead of create_sharded_queue, you should use create_transactional_event_queue – that procedure was renamed in 21c.
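For reference, a sketch of the 21c version – the parameters are the same, only the procedure name changes (check the DBMS_AQADM documentation for your exact release):

begin
    -- in 21c, create_sharded_queue was renamed to create_transactional_event_queue
    dbms_aqadm.create_transactional_event_queue(
        queue_name => 'ash_topic',
        multiple_consumers => TRUE
    );
    dbms_aqadm.start_queue('ash_topic');
end;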

You can put whichever version matches your database into the worksheet at the top (1), then click on the “Run Statement” button (2). You will see the result in the “Script Output” window (3) as shown below:

If you want to check, you can run this query to see details of the queues and topics in your schema:

select * from user_queues;

Now, we need to go to our PHXPROD1 database and create the PHX_TOPIC there. Just repeat what you just did for ASHPROD1 on the PHXPROD1 database and remember to change the name of the topic in the commands that you run!
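For convenience, here is what those commands should look like on PHXPROD1 – identical apart from the topic name:

begin
    dbms_aqadm.create_sharded_queue(
        queue_name => 'phx_topic',
        multiple_consumers => TRUE
    );
    dbms_aqadm.start_queue('phx_topic');
end;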

Create the Database Link

Great, our topics are ready to go! Next, we need to create a Database Link from the ASHPROD1 database to the PHXPROD1 database. The Database Link will allow us to perform actions against the remote database, in this case, to enqueue messages on the remote topic.

Since our databases are using TLS, we need to make the remote database (PHXPROD1) wallet available to the ASHPROD1 database, so that it can authenticate. The easiest way to do this is to upload the files we need into an Object Store bucket.

Let’s create the bucket. In the OCI Console, make sure you are in the Ashburn region and then click on the “hamburger” menu (the three lines at the top left), then “Storage” and the “Buckets”:

Then click on the “Create Bucket” button. Give your bucket a name – I used dblinks – and click on the “Create” button. All the defaults are fine for what we need:

Notice that your bucket is private:

Click on the “Upload” button to upload a file:

Then click on the “select files” link to choose the file. We need the file called cwallet.sso in the wallet we downloaded for the PHXPROD1 database (the remote database):

Once the upload completes you can close that dialog and then click on the “three dots” (1) next to the file we just uploaded and choose the “Create Pre-Authenticated Request” (2) option:

The defaults are what we want here – we want to be able to read this one object only. If you want to change the expiration to something like 2 days, just to be on the safe side, that’s not a bad idea at all! Click on the “Create Pre-Authenticated Request” button:

Make sure you take a copy of the URL, you won’t be able to get it again!

Ok, now we are ready to create the link. Open the SQL Worksheet for the ASHPROD1 database (the local/source database) and run these commands. You will need to get the right values for several fields before you run this – I’ll tell you where to get them next:

create or replace directory AQ_DBLINK_CREDENTIALS
as 'aq_dblink_credentials';

BEGIN
  DBMS_CLOUD.GET_OBJECT(
    object_uri => 'https://objectstorage.us-ashburn-1.oraclecloud.com/p/xxxx/n/xxxx/b/dblinks/o/cwallet.sso',
    directory_name => 'AQ_DBLINK_CREDENTIALS',
    file_name => 'cwallet.sso');

  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'CRED',
    username => 'ADMIN', -- remote db has case-sensitive login enabled, must be uppercase
    password => 'Welcome123##');

  DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
    db_link_name => 'PHXPROD1',
    hostname => 'adb.us-phoenix-1.oraclecloud.com',
    port => '1522',
    service_name => 'xxxxx.adb.oraclecloud.com',
    ssl_server_cert_dn => 'CN=adwc.uscom-east-1.oraclecloud.com, OU=Oracle BMCS US, O=Oracle Corporation, L=Redwood City, ST=California, C=US',
    credential_name => 'CRED',
    directory_name => 'AQ_DBLINK_CREDENTIALS');
END;

In the GET_OBJECT call, the object_uri needs to be that URL that you just copied from the Pre-Authenticated Request.

In the CREATE_CREDENTIAL call, the username should be the user for the remote (PHXPROD1) database – we can just use ADMIN. Note that this must be in upper case since Autonomous Database is configured for case-sensitive login by default. The password should be the password for that user.

In the CREATE_DATABASE_LINK call, the db_link_name is what we are going to use to refer to the remote database. I just used the name of the database – you’ll see later why that makes things more intuitive. You can get the values for the hostname, port, service_name and ssl_server_cert_dn fields from the wallet you downloaded. Make sure you use the wallet for the PHXPROD1 database. You will find the right values in the tnsnames.ora file, and you can just copy them in here. Here’s an example – the values you need are the port, host, service_name and ssl_server_cert_dn entries:

phxprod1_high = (description= (retry_count=20)(retry_delay=3)(address=(protocol=tcps)(port=1522)(host=adb.us-phoenix-1.oraclecloud.com))(connect_data=(service_name=xxxx_phxprod1_high.adb.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com, OU=Oracle BMCS US, O=Oracle Corporation, L=Redwood City, ST=California, C=US")))

Once you have all the right values, paste this into the SQL Worksheet and click on the “Run Script” button:

You can check it worked by doing a query through the database link. For example, let’s get a list of the queues/topics on the remote database, entering the query on the ASHPROD1 instance and using the database link (“@PHXPROD1“) to have it run on the other database – the output should show the topic PHX_TOPIC we created in the PHXPROD1 database.
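A sketch of such a query – it lists the queues visible to the remote ADMIN schema:

-- runs against the remote PHXPROD1 database via the database link
select name, queue_type from user_queues@PHXPROD1;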

Start message propagation

Ok, now we are ready to start propagating messages! (Yay!)

We want to run these commands in the SQL Worksheet on the ASHPROD1 database (the source/local database):

BEGIN
   dbms_aqadm.schedule_propagation(
      queue_name         => 'ash_topic', 
      destination        => 'phxprod1',
      destination_queue  => 'phx_topic');
   dbms_aqadm.enable_propagation_schedule(
      queue_name         => 'ash_topic',
      destination        => 'phxprod1',
      destination_queue  => 'phx_topic');
end;

You can view the schedule you just created with this query:

select destination, LAST_RUN_TIME, NEXT_RUN_TIME, LAST_ERROR_TIME, LAST_ERROR_MSG 
from dba_queue_schedules;
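If you ever need to pause or reconfigure propagation, there is a matching procedure to remove the schedule – a sketch, mirroring the parameters we used above:

begin
   dbms_aqadm.unschedule_propagation(
      queue_name         => 'ash_topic',
      destination        => 'phxprod1',
      destination_queue  => 'phx_topic');
end;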

Start the consumer

Now we can start up our consumer! Back in our directory with our code (the one with the pom.xml in it) run this command to start the consumer:

export DB_PASSWORD=Welcome123##          <-- use your real password!
mvn clean compile exec:exec -P consumer

After a few moments, the consumer will start up and we will see this message indicating that it is connected and waiting for messages:

[INFO] 
[INFO] --- exec-maven-plugin:3.0.0:exec (default-cli) @ propagation ---
Waiting for messages...

So now, we need to send some messages to it!

Creating a Producer

Let’s create another Java file called src/main/java/com/wordpress/redstack/Producer.java with this content:

package com.wordpress.redstack;

import java.sql.SQLException;

import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicSession;

import oracle.AQ.AQException;
import oracle.jms.AQjmsAgent;
import oracle.jms.AQjmsFactory;
import oracle.jms.AQjmsSession;
import oracle.jms.AQjmsTextMessage;
import oracle.jms.AQjmsTopicPublisher;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class Producer {

    private static String username = "admin";
    private static String url = "jdbc:oracle:thin:@ashprod1_high?TNS_ADMIN=/home/mark/src/redstack/ASHPROD1";
    private static String topicName = "ash_topic";

    public static void main(String[] args) throws AQException, SQLException, JMSException {

        // create a topic session
        PoolDataSource ds = PoolDataSourceFactory.getPoolDataSource();
        ds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        ds.setURL(url);
        ds.setUser(username);
        ds.setPassword(System.getenv("DB_PASSWORD"));

        TopicConnectionFactory tcf = AQjmsFactory.getTopicConnectionFactory(ds);
        TopicConnection conn = tcf.createTopicConnection();
        conn.start();
        TopicSession session = (AQjmsSession) 
           conn.createSession(true, Session.AUTO_ACKNOWLEDGE);

        // publish message
        Topic topic = ((AQjmsSession) session).getTopic(username, topicName);
        AQjmsTopicPublisher publisher = (AQjmsTopicPublisher) session.createPublisher(topic);

        AQjmsTextMessage message = (AQjmsTextMessage) 
           session.createTextMessage("hello from ashburn, virginia!");
        publisher.publish(message, new AQjmsAgent[] { new AQjmsAgent("bob", null) });
        session.commit();

        // clean up
        publisher.close();
        session.close();
        conn.close();
    }

}

Let’s walk through this code. It’s very similar to the consumer, so I’ll just point out the important differences.

    private static String username = "admin";
    private static String url = "jdbc:oracle:thin:@ashprod1_high?TNS_ADMIN=/home/mark/src/redstack/ASHPROD1";
    private static String topicName = "ash_topic";

Notice that we are using the ASHPROD1 instance in the producer and the ASH_TOPIC.

        // publish message
        Topic topic = ((AQjmsSession) session).getTopic(username, topicName);
        AQjmsTopicPublisher publisher = (AQjmsTopicPublisher) session.createPublisher(topic);

        AQjmsTextMessage message = (AQjmsTextMessage) 
           session.createTextMessage("hello from ashburn, virginia!");
        publisher.publish(message, new AQjmsAgent[] { new AQjmsAgent("bob", null) });
        session.commit();

We create a TopicPublisher, and we are sending a simple JMS text message to the topic.

Let’s run our producer now:

export DB_PASSWORD=Welcome123##          <-- use your real password!
mvn clean compile exec:exec -P producer

When that finishes (you’ll see a “BUILD SUCCESS” message) go and have a look at your consumer – you should see something like this:

[INFO] --- exec-maven-plugin:3.0.0:exec (default-cli) @ propagation ---
Waiting for messages...
hello from ashburn, virginia!

Yay! It worked! We just published a message on the ASH_TOPIC in the ASHPROD1 instance and it was propagated to PHXPROD1 for us and our consumer read it off the PHX_TOPIC in PHXPROD1.

Here’s an interesting query we can run to see what happened:

select queue, msg_id, msg_state, enq_timestamp, deq_timestamp, deq_user_id, user_data, consumer_name from aq$ash_topic;

You can also run that on the remote database like this:

select queue, msg_id, msg_state, enq_timestamp, deq_timestamp, deq_user_id, user_data, consumer_name
 from aq$phx_topic@phxprod1;

Notice the consumer names – in the local ASHPROD1 instance, the consumer is AQ$_P_106126_92PHXPROD1 (yours will probably be slightly different.) That’s the propagation consumer that is running to propagate the messages to PHXPROD1.

But in the PHXPROD1 instance, the consumer is BOOK! That’s the name we gave to our consumer:

        AQjmsTopicSubscriber subscriber = (AQjmsTopicSubscriber) 
           session.createDurableSubscriber(topic, "BOOK");

Go ahead and send some more messages with the producer! Enjoy!

]]>
https://redstack.dev/2022/05/24/cross-region-event-propagation-with-oracle-transactional-event-queues/feed/ 1 3527
Installing Oracle REST Data Services (standalone) https://redstack.dev/2022/05/17/installing-oracle-rest-data-services/ https://redstack.dev/2022/05/17/installing-oracle-rest-data-services/#respond <![CDATA[Mark Nelson]]> Tue, 17 May 2022 16:27:22 +0000 <![CDATA[Uncategorized]]> <![CDATA[ORDS]]> https://redstack.dev/?p=3466 <![CDATA[I have been using Oracle REST Data Services (you might know it as “Database Actions”) with my Oracle Autonomous Database for a while now, and I wanted to play with some new features, which led me to want to install … Continue reading ]]> <![CDATA[

I have been using Oracle REST Data Services (you might know it as “Database Actions”) with my Oracle Autonomous Database for a while now, and I wanted to play with some new features, which led me to want to install my own (“customer managed” or “standalone”) ORDS instance. It took me a few goes, and some help from Jeff Smith (yes, that Jeff Smith) to get it right, so I thought it would be good to document how I got it working!

In this example, I am going to use an Oracle 21c database, and I will set up ORDS 22.1 in one of the pluggable databases. Once we have it up and running, we will use Database Actions and look at some of the services in the REST service catalog.

Setting up a database

First, of course I needed a database to play with. I fired up a 21c database in a container for this exercise. You will need to go accept the license agreement before you can pull the container image.

Go to https://container-registry.oracle.com and navigate to “Database” and then “enterprise” and click on the button to accept the agreement. You may have to log in if you are not already.

You will also need to log in to Oracle Container Registry with your container runtime; in this post I am using Docker. This will prompt for your username and password:

docker login container-registry.oracle.com

Now we can start up a database in a container. Here is the command I used, I set a password and the SID/service names, and make sure to expose the database port so we can access the database from outside the container:

docker run -d \
  --name oracle-db \
  -p 1521:1521 \
  -e ORACLE_PWD=Welcome123## \
  -e ORACLE_SID=ORCL \
  -e ORACLE_PDB=PDB1 \
  container-registry.oracle.com/database/enterprise:21.3.0.0

Note: If you use different names, you will need to adjust the example commands appropriately! Also, if you want to be able to restart this database without losing all your data, you’ll want to mount a volume – the OCR page has details, and there is a sketch below.
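As a sketch, adding a named volume looks like this – /opt/oracle/oradata is the data directory these images use, but check the OCR page for your image version:

# same command as above, plus a named volume for the database files
docker run -d \
  --name oracle-db \
  -p 1521:1521 \
  -e ORACLE_PWD=Welcome123## \
  -e ORACLE_SID=ORCL \
  -e ORACLE_PDB=PDB1 \
  -v oracle-db-data:/opt/oracle/oradata \
  container-registry.oracle.com/database/enterprise:21.3.0.0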

It takes a few minutes to start up. You can watch the logs using this command, and you will need to wait until you see the message indicating it is ready to use:

docker logs -f oracle-db

When the database is ready, we can log on and give the necessary privileges to our PDB admin user.

sqlplus sys/Welcome123##@//localhost:1521/orcl as sysdba
SQL> alter session set container = PDB1;
SQL> grant dba to pdbadmin;

Ok, now we are ready to install ORDS!

Installing ORDS

First step is to download it, of course. Here is the site to get the latest version of ORDS:

https://www.oracle.com/database/technologies/appdev/rest-data-services-downloads.html

There is also a direct link to the latest version: https://download.oracle.com/otn_software/java/ords/ords-latest.zip

Once you have it downloaded, just unzip it into a new directory. I unzipped it into /home/mark/ords.

The steps that I am going to describe here are described in more detail in the documentation.

Now we want to run the pre-install script to set up the necessary privileges. I am using the pdbadmin user, the admin user in my PDB. This script will take just a few moments to run:

sqlplus sys/Welcome123##@//localhost:1521/pdb1 as sysdba \
        @scripts/installer/ords_installer_privileges.sql pdbadmin

Great, now we can run the installer. I used the interactive installer, which will ask you for the necessary information and let you type it in as you go. It is also possible to do a “silent” install by providing all of the information on the command line – the documentation explains how to do this.

Create a directory to hold the configuration and start the interactive installer:

cd /home/mark/ords
export PATH=/home/mark/ords/bin:$PATH
mkdir config
ords --config /home/mark/ords/config install

Here’s what the interactive install dialog looks like. Mostly I just took the defaults – the only values I typed in were the database service name (pdb1), the administrator username (pdbadmin), and the password:

Oracle REST Data Services - Interactive Install

  Enter a number to select the type of installation
    [1] Install or upgrade ORDS in the database only
    [2] Create or update a database pool and install/upgrade ORDS in the database
    [3] Create or update a database pool only
  Choose [2]:
  Enter a number to select the database connection type to use
    [1] Basic (host name, port, service name)
    [2] TNS (TNS alias, TNS directory)
    [3] Custom database URL
  Choose [1]:
  Enter the database host name [localhost]:
  Enter the database listen port [1521]:
  Enter the database service name [orcl]: pdb1

  Provide database user name with administrator privileges.
    Enter the administrator username: pdbadmin

  Enter the database password for pdbadmin:
Connecting to database user: pdbadmin url: jdbc:oracle:thin:@//localhost:1521/pdb1

Retrieving information.
  Enter the default tablespace for ORDS_METADATA and ORDS_PUBLIC_USER [SYSAUX]:
  Enter the temporary tablespace for ORDS_METADATA and ORDS_PUBLIC_USER [TEMP]:
  Enter a number to select additional feature(s) to enable:
    [1] Database Actions  (Enables all features)
    [2] REST Enabled SQL and Database API
    [3] REST Enabled SQL
    [4] Database API
    [5] None
  Choose [1]:
  Enter a number to configure and start ORDS in standalone mode
    [1] Configure and start ORDS in standalone mode
    [2] Skip
  Choose [1]:
  Enter a number to use HTTP or HTTPS protocol
    [1] HTTP
    [2] HTTPS
  Choose [1]:
  Enter the HTTP port [8080]:

Note: I just used HTTP, but if you want to use HTTPS, you will probably want to create some certificates and configure them in the installer. Here are some commands to create a self-signed certificate and convert the key to the DER format ORDS requires:

# these are optional - only required if you want to use HTTPS

openssl req -new -x509 -sha256 -newkey rsa:2048 -nodes -keyout ords.key.pem \
            -days 365 -out ords.pem
openssl x509 -in ords.pem -text -noout
openssl pkcs8 -topk8 -inform PEM -outform DER -in ords.key.pem -out ords.der -nocrypt

Once you complete the interview, the installer will perform the installation. It takes just a couple of minutes, and it will start up the ORDS standalone server for you. If you need to stop it (with Ctrl-C) you can restart it with this command:

ords --config /home/mark/ords/config serve

Ok, now we have ORDS up and running, we are going to need a user!

Preparing an ORDS user

Let’s create a regular database user and give them access to ORDS.

Using the PDB admin user, we can create a new user and give them the necessary permissions to use ORDS:

sqlplus pdbadmin/Welcome123##@//localhost:1521/pdb1

SQL> create user mark identified by Welcome123##;
SQL> grant connect, resource to mark;
SQL> grant unlimited tablespace to mark;
SQL> begin
    ords.enable_schema(
        p_enabled => true,
        p_schema => 'mark',
        p_url_mapping_type => 'BASE_PATH',
        p_url_mapping_pattern => 'mark',
        p_auto_rest_auth => false
    );
    commit;
end;
/

Great, now we are ready to use ORDS!

Log in to ORDS Database Actions

To log into ORDS, open a browser and go to this URL: http://localhost:8080/ords/sql-developer

You may need to change the hostname or port if you used something different.

You should see the login page:

Enter your username – I used mark – then press Next, and enter the password, I used Welcome123##. This will take you to the main “Database Actions” page:

Let’s create a table and enter some data 🙂

Click on the SQL card (top left) to open the SQL worksheet, and enter these statements:

create table city (
    name varchar2(64),
    population number
);

insert into city values ('Tokyo', 37000000);
insert into city values ('Delhi', 29000000);
insert into city values ('Shanghai', 26000000);
insert into city values ('Sao Paulo', 21000000);
insert into city values ('Mexico City', 21000000);

Click on the “Run Script” icon to execute these statements – it’s the one the red arrow is highlighting:

Let’s expose that table as a REST service!

Creating a REST service

ORDS allows us to easily expose an SQL statement, or a PL/SQL block as a REST service. Let’s navigate to the REST page – click on the “hamburger menu” (1) and then the REST page (2):

The basic structure is that we create a “module” which contains “templates” which in turn contain “handlers.” So let’s start by creating a module. Click on the “Modules” option in the top menu.

Then click on the “Create Module” button in the top right corner:

Give the module a name (I used mark) and a base path (I used /api/). Since we are just playing here, set the “Protected by Privilege” to Not protected. Obviously, in real life, you’d set up authentication, for example using OAuth, which ORDS provides out of the box – but that’s another post 🙂 Finally, click on the “Create” button to create the module. It will now appear in the modules page:

Click on the module name (“mark” in the example above) to open the module, and click on the “Create Template” button on the right hand side:

Enter a URI Template for this service, I used cities for mine, then click on the “Create” button:

Now you will see the template page. Click on the “Create Handler” button on the right:

In the “Create Handler” dialog, we provide details of the service. Notice that you can choose the HTTP Method (GET, POST, DELETE, etc.) and you can control paging. For this service, we want to create a GET handler and we want the “Source Type” to be Collection Query which lets us enter an SQL statement. While you are here, have a look in the pull-down and notice that you can also use PL/SQL! You can also use bind variables in here, so we can accept parameters and use them in the query or PL/SQL code – see the sketch after the note below.

For now, enter this simple query, then click on the “Create” button:

select * from city

Note: You should not include the semi-colon!
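As mentioned above, bind variables let a handler accept parameters. A hypothetical variant of this query – for a GET handler, ORDS binds :name from a matching query string parameter:

select * from city where name = :name

You could then call it with something like curl "http://localhost:8080/ords/mark/api/cities?name=Tokyo".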

Once you have created the handler, you will see the details view, where you can test it by clicking on the “Run” button (1):

Notice that you see the results in the bottom half of the screen. You also have the URL for the service provided (2) and there is a “copy” button on the right hand side. Let’s test our service using cURL:

$ curl http://localhost:8080/ords/mark/api/cities
{"items":[{"name":"Tokyo","population":37000000},{"name":"Dehli","population":29000000},{"name":"Shanghai","population":26000000},{"name":"Sao Paulo","population":21000000},{"name":"Values","population":21000000}],"hasMore":false,"limit":25,"offset":0,"count":5,"links":[{"rel":"self","href":"http://localhost:8080/ords/mark/api/cities"},{"rel":"describedby","href":"http://localhost:8080/ords/mark/metadata-catalog/api/item"},{"rel":"first","href":"http://localhost:8080/ords/mark/api/cities"}]}

Great! We made a service!

Explore the out of the box services

ORDS also provides a heap of out of the box services for us automatically. To explore these, let’s use Postman, which is a very popular tool for REST testing. You can download it from the Postman website.

Jeff Smith has a great post here that explains how to import all the ORDS REST APIs into Postman.

When you open Postman, click on the “Import” button:

Now you need the right URL! If you have been following along and using the same names as me your URL will be:

http://localhost:8080/ords/mark/_/db-api/latest/metadata-catalog/openapi.json

If you used a different user, you will need to change “mark” to your username in that URL. After you click on “Import”, choose “Link” as the type and enter your URL:

One good thing about Postman is that we can set the authentication parameters at the top level, on that “ORDS Database API” folder that you just created. Open that and click on the “Authorization” tab, choose “Basic Auth” as the type and enter the database user and password:

In this folder you will see a whole collection of services for all kinds of things. Let’s try a simple one! Navigate to the “Get Database version” service and click on the “Send” button in the top right corner. You’ll see the result data in the bottom pane:

Well, there you go! We installed ORDS, used the Database Actions and REST interfaces, created a service and explored the out of the box services! I hope you enjoyed!

]]>
https://redstack.dev/2022/05/17/installing-oracle-rest-data-services/feed/ 0 3466
The OCI Service Mesh is now available! https://redstack.dev/2022/04/27/the-oci-service-mesh-is-now-available/ https://redstack.dev/2022/04/27/the-oci-service-mesh-is-now-available/#respond <![CDATA[Mark Nelson]]> Wed, 27 Apr 2022 19:41:28 +0000 <![CDATA[Uncategorized]]> <![CDATA[OCI]]> <![CDATA[Service Mesh]]> https://redstack.dev/?p=3457 <![CDATA[Dusko Vukmanovic just announced the general availability of OCI Service Mesh in this blog post. It provides security, observability, and network traffic management for cloud native applications without requiring any changes to the applications. Its a free managed service and its available in all … Continue reading ]]> <![CDATA[

Dusko Vukmanovic just announced the general availability of OCI Service Mesh in this blog post.

It provides security, observability, and network traffic management for cloud native applications without requiring any changes to the applications.

It’s a free managed service and it’s available in all commercial regions today. Check it out!

]]>
https://redstack.dev/2022/04/27/the-oci-service-mesh-is-now-available/feed/ 0 3457
Playing with Kafka Java Client for TEQ – creating the simplest of producers and consumers https://redstack.dev/2022/04/26/playing-with-okafka-creating-the-simplest-of-producers-and-consumers/ https://redstack.dev/2022/04/26/playing-with-okafka-creating-the-simplest-of-producers-and-consumers/#comments <![CDATA[Mark Nelson]]> Tue, 26 Apr 2022 19:33:08 +0000 <![CDATA[Uncategorized]]> <![CDATA[ADB]]> <![CDATA[kafka]]> <![CDATA[okafka]]> https://redstack.dev/?p=3418 <![CDATA[Today I was playing with Kafka Java Client for TEQ, that allows you to use Oracle Transactional Event Queues (formerly known as Sharded Queues) in the Oracle Database just like Kafka. Kafka Java Client for TEQ is available as a … Continue reading ]]> <![CDATA[

Today I was playing with Kafka Java Client for TEQ, that allows you to use Oracle Transactional Event Queues (formerly known as Sharded Queues) in the Oracle Database just like Kafka.

Kafka Java Client for TEQ is available as a preview in GitHub here: http://github.com/oracle/okafka

In this preview version, there are some limitations documented in the repository. The main one to be aware of is that you need to use the okafka library, not the regular kafka one, so you would need to change existing Kafka client code if you wanted to try out the preview.

Preparing the database

To get started, I grabbed a new Oracle Autonomous Database instance on Oracle Cloud, and I opened up the SQL Worksheet in Database Actions and created myself a user. As the ADMIN user, I ran the following commands:

create user mark identified by SomePassword;  -- that's not the real password!
grant connect, resource to mark;
grant create session to mark;
grant unlimited tablespace to mark;
grant execute on dbms_aqadm to mark;
grant execute on dbms_aqin to mark;
grant execute on dbms_aqjms to mark;
grant select_catalog_role to mark;
grant select on gv$instance to mark;
grant select on gv$listener_network to mark;
commit;

And of course, I needed a topic to work with, so I logged on to SQL Worksheet as my new MARK user and created a topic called topic1 with these commands:

begin
    sys.dbms_aqadm.create_sharded_queue(queue_name => 'topic1', multiple_consumers => TRUE); 
    sys.dbms_aqadm.set_queue_parameter('topic1', 'SHARD_NUM', 1);
    sys.dbms_aqadm.set_queue_parameter('topic1', 'STICKY_DEQUEUE', 1);
    sys.dbms_aqadm.set_queue_parameter('topic1', 'KEY_BASED_ENQUEUE', 1);
    sys.dbms_aqadm.start_queue('topic1');
end;

Note that this is for Oracle Database 19c. If you are using 21c, create_sharded_queue is renamed to create_transactional_event_queue, so you will have to update that line.

The topic is empty right now, since we just created it, but here are a couple of queries that will be useful later. We can see the messages in the topic, with details including the enqueue time, status, etc., using this query:

select * from topic1;

This is a useful query to see a count of messages in each status:

select msg_state, count(*)
from aq$topic1
group by msg_state;

Building the OKafka library

We need to build the OKafka library and install it in our local Maven repository so that it will be available to use as a dependency since the preview is not currently available in Maven Central.

First, clone the repository:

git clone https://github.com/oracle/okafka

Now we can build the uberjar with the included Gradle wrapper:

cd okafka
./gradlew fullJar

This will put the JAR file in clients/build/libs and we can install this into our local Maven repository using this command:

mvn install:install-file \
    -DgroupId=org.oracle.okafka \
    -DartifactId=okafka \
    -Dversion=0.8 \
    -Dfile=clients/build/libs/okafka-0.8-full.jar \
    -DpomFile=clients/okafka-0.8.pom 

Now we are ready to start writing our code!

Creating the Producer

Let’s start by creating our Maven POM file. In a new directory, called okafka, I created a file called pom.xml with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" 
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
         https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<groupId>com.example</groupId>
	<artifactId>okafka</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<name>okafka</name>

	<properties>
		<java.version>17</java.version>
		<maven.compiler.source>17</maven.compiler.source>
		<maven.compiler.target>17</maven.compiler.target>
		<okafka.version>0.8</okafka.version>
	</properties>

	<dependencies>
		<dependency>
			<groupId>org.oracle.okafka</groupId>
			<artifactId>okafka</artifactId>
			<version>${okafka.version}</version>
		</dependency>
	</dependencies>
</project>

I am using Java 17 for this example. But you could use anything from 1.8 onwards, just update the version in the properties if you are using an earlier version.

Now let’s create our producer class:

mkdir -p src/main/java/com/example/okafka
touch src/main/java/com/example/okafka/Producer.java

Here’s the content for Producer.java:

package com.example.okafka;

import java.io.FileNotFoundException;
import java.io.InputStream;
import java.util.Properties;

import org.oracle.okafka.clients.producer.KafkaProducer;
import org.oracle.okafka.clients.producer.ProducerRecord;

public class Producer {

    private static final String propertiesFilename = "producer.properties";

    public static void main(String[] args) {
        // configure logging level
        System.setProperty("org.slf4j.simpleLogger.defaultLogLevel", "INFO");

        // load props
        Properties props = getProperties();

        String topicName = props.getProperty("topic.name", "TOPIC1");

        try(KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 100; i++) {
                producer.send(new ProducerRecord<String, String>(
                    topicName, 0, "key", "value " + i));
            }
            System.out.println("sent 100 messages");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private static Properties getProperties() {
        Properties props = new Properties();

        try (
                InputStream inputStream = Producer.class
                    .getClassLoader()
                    .getResourceAsStream(propertiesFilename);
        ) {
            if (inputStream != null) {
                props.load(inputStream);
            } else {
                throw new FileNotFoundException(
                     "could not find properties file: " + propertiesFilename);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return props;
    }
}

Let’s walk through this code and talk about what it does.

First, let’s notice the imports. We are importing the OKafka versions of the familiar Kafka classes. These have the same interfaces as the standard Kafka ones, but they work with Oracle TEQ instead:

import org.oracle.okafka.clients.producer.KafkaProducer;
import org.oracle.okafka.clients.producer.ProducerRecord;

In the main() method we first set the log level and then we load some properties from our producer.properties config file. You will see the getProperties() method at the end of the file is fairly standard – it just reads the file and returns the contents as a new Properties object.

Let’s see what’s in that producer.properties file, which is located in the src/main/resources directory:

oracle.service.name=xxxxx_prod_high.adb.oraclecloud.com
oracle.instance.name=prod_high
oracle.net.tns_admin=/home/mark/src/okafka/wallet
security.protocol=SSL
tns.alias=prod_high

bootstrap.servers=adb.us-ashburn-1.oraclecloud.com:1522
batch.size=200
linger.ms=100
buffer.memory=326760
key.serializer=org.oracle.okafka.common.serialization.StringSerializer
value.serializer=org.oracle.okafka.common.serialization.StringSerializer
topic.name=TOPIC1

There are two groups of properties in there. The first group provides details about my Oracle Autonomous Database instance, including the location of the wallet file – we’ll get that and set it up in a moment.

The second group contains the normal Kafka properties that you might expect to see, assuming you are familiar with Kafka. Notice that the bootstrap.servers lists the address of my Oracle Autonomous Database, not a Kafka broker! Also notice that we are using the serializers (and later, deserializers) provided in the OKafka library, not the standard Kafka ones.

Next, we set the topic name by reading it from the properties file. If it is not there, the second argument provides a default/fallback value:

String topicName = props.getProperty("topic.name", "TOPIC1");

And now we are ready to create the producer and send some messages:

try(KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    for (int i = 0; i < 100; i++) {
        producer.send(new ProducerRecord<String, String>(
            topicName, 0, "key", "value " + i));
    }
    System.out.println("sent 100 messages");
} catch (Exception e) {
    e.printStackTrace();
}

We created the KafkaProducer and for this example, we are using String for both the key and the value.

We have a loop to send 100 messages, which we create with the ProducerRecord class. We are just setting them to some placeholder data.

Ok, that’s all we need in the code. But we will need to get the wallet and set it up so Java programs can use it to authenticate. Have a look at this post for details on how to do that! You just need to download the wallet from the OCI console, unzip it into a directory called wallet – put that in the same directory as the pom.xml, and then edit the sqlnet.ora to set the DIRECTORY to the right location, e.g. /home/mark/src/okafka/wallet for me, and then add your credentials using the setup_wallet.sh I showed in that post.
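If you don’t want to jump over to that post right now, the edited sqlnet.ora ends up looking like this (with the DIRECTORY set to the wallet location used in this example):

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/home/mark/src/okafka/wallet")))
SSL_SERVER_DN_MATCH=yes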

Finally, you need to add these lines to the ojdbc.properties file in the wallet directory to tell OKafka the user to connect to the database with:

user=mark
password=SomePassword
oracle.net.ssl_server_dn_match=true

With the wallet set up, we are ready to build and run our code!

mvn clean package
CLASSPATH=target/okafka-0.0.1-SNAPSHOT.jar
CLASSPATH=$CLASSPATH:$HOME/.m2/repository/org/oracle/okafka/okafka/0.8/okafka-0.8.jar
java -classpath $CLASSPATH com.example.okafka.Producer

The output should look like this:

[main] INFO org.oracle.okafka.clients.producer.ProducerConfig - ProducerConfig values: 
        acks = 1
        batch.size = 200
        bootstrap.servers = [adb.us-ashburn-1.oraclecloud.com:1522]
        buffer.memory = 326760
        client.id = 
        compression.type = none
        connections.max.idle.ms = 540000
        enable.idempotence = false
        interceptor.classes = []
        key.serializer = class org.oracle.okafka.common.serialization.StringSerializer
        linger.ms = 100
        max.block.ms = 60000
        max.in.flight.requests.per.connection = 5
        max.request.size = 1048576
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        oracle.instance.name = prod_high
        oracle.net.tns_admin = /home/mark/src/okafka/wallet
        oracle.service.name = xxxxx_prod_high.adb.oraclecloud.com
        partitioner.class = class org.oracle.okafka.clients.producer.internals.DefaultPartitioner
        receive.buffer.bytes = 32768
        reconnect.backoff.max.ms = 1000
        reconnect.backoff.ms = 50
        request.timeout.ms = 30000
        retries = 0
        retry.backoff.ms = 100
        security.protocol = SSL
        send.buffer.bytes = 131072
        tns.alias = prod_high
        transaction.timeout.ms = 60000
        transactional.id = null
        value.serializer = class org.oracle.okafka.common.serialization.StringSerializer

[main] WARN org.oracle.okafka.common.utils.AppInfoParser - Error while loading kafka-version.properties :inStream parameter is null
[main] INFO org.oracle.okafka.common.utils.AppInfoParser - Kafka version : unknown
[main] INFO org.oracle.okafka.common.utils.AppInfoParser - Kafka commitId : unknown
[kafka-producer-network-thread | producer-1] INFO org.oracle.okafka.clients.Metadata - Cluster ID: 
sent 100 messages
[main] INFO org.oracle.okafka.clients.producer.KafkaProducer - [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.

You can see it dumps out the properties, and then after some informational messages you see the “sent 100 messages” output. Now you might want to go and run that query to look at the messages in the database!

Now, let’s move on to creating a consumer, so we can read those messages back.

Creating the Consumer

The consumer is going to look very similar to the producer, and it will also have its own properties file. Here’s the contents of the properties file first – put this in src/main/resources/consumer.properties:

oracle.service.name=xxxxx_prod_high.adb.oraclecloud.com
oracle.instance.name=prod_high
oracle.net.tns_admin=/home/mark/src/okafka/wallet
security.protocol=SSL
tns.alias=prod_high

bootstrap.servers=adb.us-ashburn-1.oraclecloud.com:1522
group.id=bob
enable.auto.commit=true
auto.commit.interval.ms=10000
key.deserializer=org.oracle.okafka.common.serialization.StringDeserializer
value.deserializer=org.oracle.okafka.common.serialization.StringDeserializer
max.poll.records=100

And here is the content for Consumer.java which you create in src/main/java/com/example/okafka:

package com.example.okafka;

import java.io.FileNotFoundException;
import java.io.InputStream;
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

import org.oracle.okafka.clients.consumer.ConsumerRecord;
import org.oracle.okafka.clients.consumer.ConsumerRecords;
import org.oracle.okafka.clients.consumer.KafkaConsumer;

public class Consumer {
    private static final String propertiesFilename = "consumer.properties";

    public static void main(String[] args) {
        // logging level
        System.setProperty("org.slf4j.simpleLogger.defaultLogLevel", "INFO");


        // load props
        Properties props = getProperties();

        String topicName = props.getProperty("topic.name", "TOPIC1");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList(topicName));

        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(30_000));
        for (ConsumerRecord<String, String> record : records) {
            System.out.println(
                record.topic() + " " + 
                record.partition() + " " + 
                record.key() + " " + 
                record.value());
        }
        consumer.close();
    }

    private static Properties getProperties() {
        Properties props = new Properties();

        try (
                InputStream inputStream = Producer.class
                    .getClassLoader()
                    .getResourceAsStream(propertiesFilename);
        ) {
            if (inputStream != null) {
                props.load(inputStream);
            } else {
                throw new FileNotFoundException(
                    "could not find properties file: " + propertiesFilename);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return props;
    }
}

A lot of this is the same as the producer, so let’s walk through the parts that are different.

First, we load a different properties file – the consumer one – which has a few properties that are relevant for consumers. In particular, we are setting max.poll.records to 100, so we’ll only be reading at most 100 messages off the topic at a time.

Here’s how we create the consumer:

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList(topicName));

Again, you may notice that this is very similar to Kafka. We are using String as the type for both the key and value. Notice we provided the appropriate deserializers in the property file, the ones from the OKafka library, not the standard Kafka ones.

Here’s the actual consumer code:

        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(30_000));
        for (ConsumerRecord<String, String> record : records) {
            System.out.println(
                record.topic() + " " + 
                record.partition() + " " + 
                record.key() + " " + 
                record.value());
        }
        consumer.close();

We open our consumer and poll for messages (for 30 seconds), then we just print out some information about each message, and then close our consumer! Again, this is very simple, but it’s enough to test consuming messages.

We can run this and we should see all of the message data in the output. Here’s how to run it, and an excerpt of the output:

mvn clean package
CLASSPATH=target/okafka-0.0.1-SNAPSHOT.jar
CLASSPATH=$CLASSPATH:$HOME/.m2/repository/org/oracle/okafka/okafka/0.8/okafka-0.8.jar
java -classpath $CLASSPATH com.example.okafka.Consumer

[main] INFO org.oracle.okafka.clients.consumer.ConsumerConfig - ConsumerConfig values: 
        auto.commit.interval.ms = 10000
        auto.offset.reset = latest
        bootstrap.servers = [adb.us-ashburn-1.oraclecloud.com:1522]
        check.crcs = true
        client.id = 
        connections.max.idle.ms = 540000
        default.api.timeout.ms = 60000
        enable.auto.commit = true
        exclude.internal.topics = true
        fetch.max.bytes = 52428800
        fetch.max.wait.ms = 500
        fetch.min.bytes = 1
        group.id = bob
        heartbeat.interval.ms = 3000
        interceptor.classes = []
        internal.leave.group.on.close = true
        isolation.level = read_uncommitted
        key.deserializer = class org.oracle.okafka.common.serialization.StringDeserializer
        max.partition.fetch.bytes = 1048576
        max.poll.interval.ms = 300000
        max.poll.records = 100
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        oracle.instance.name = prod_high
        oracle.net.tns_admin = /home/mark/src/okafka/wallet
        oracle.service.name = xxxxx_prod_high.adb.oraclecloud.com
        receive.buffer.bytes = 65536
        reconnect.backoff.max.ms = 1000
        reconnect.backoff.ms = 50
        request.timeout.ms = 30000
        retry.backoff.ms = 100
        security.protocol = SSL
        send.buffer.bytes = 131072
        session.timeout.ms = 10000
        tns.alias = prod_high
        value.deserializer = class org.oracle.okafka.common.serialization.StringDeserializer

[main] WARN org.oracle.okafka.common.utils.AppInfoParser - Error while loading kafka-version.properties :inStream parameter is null
[main] INFO org.oracle.okafka.common.utils.AppInfoParser - Kafka version : unknown
[main] INFO org.oracle.okafka.common.utils.AppInfoParser - Kafka commitId : unknown
TOPIC1 0 key value 0
TOPIC1 0 key value 1
TOPIC1 0 key value 2
...

So there you go! We successfully created a very simple producer and consumer and we sent and received messages from a topic using the OKafka library and Oracle Transactional Event Queues!

]]>
https://redstack.dev/2022/04/26/playing-with-okafka-creating-the-simplest-of-producers-and-consumers/feed/ 1 3418
Loading data into Autonomous Data Warehouse using Datapump https://redstack.dev/2022/04/12/loading-data-into-autonomous-data-warehouse-using-datapump/ https://redstack.dev/2022/04/12/loading-data-into-autonomous-data-warehouse-using-datapump/#respond <![CDATA[Mark Nelson]]> Tue, 12 Apr 2022 19:47:51 +0000 <![CDATA[Uncategorized]]> https://redstack.dev/?p=3415 <![CDATA[Today I needed to load some data in my Oracle Autonomous Database running on Oracle Cloud (OCI). I found this great article that explained just what I needed! Thanks to Ankur Saini for sharing!]]> <![CDATA[

Today I needed to load some data in my Oracle Autonomous Database running on Oracle Cloud (OCI). I found this great article that explained just what I needed!

Thanks to Ankur Saini for sharing!

]]>
https://redstack.dev/2022/04/12/loading-data-into-autonomous-data-warehouse-using-datapump/feed/ 0 3415
Configuring a Java application to connect to Autonomous Database using Mutual TLS https://redstack.dev/2022/04/11/configuring-a-java-application-to-connect-to-autonomous-database-using-mutual-tls/ https://redstack.dev/2022/04/11/configuring-a-java-application-to-connect-to-autonomous-database-using-mutual-tls/#comments <![CDATA[Mark Nelson]]> Mon, 11 Apr 2022 15:21:41 +0000 <![CDATA[Uncategorized]]> <![CDATA[ADB]]> <![CDATA[Java]]> <![CDATA[MTLS]]> https://redstack.dev/?p=3388 <![CDATA[In this post, I am going to explain how to configure a standalone Java (SE) application to connect to an Oracle Autonomous Database instance running in Oracle Cloud using Mutual TLS. The first thing you are going to need is … Continue reading ]]> <![CDATA[

In this post, I am going to explain how to configure a standalone Java (SE) application to connect to an Oracle Autonomous Database instance running in Oracle Cloud using Mutual TLS.

The first thing you are going to need is an Oracle Autonomous Database instance. If you are reading this post, you probably already know how to get one. But just in case you don’t – here’s a good reference to get you started – and remember, this is available in the “always free” tier, so you can try this out for free!

When you look at your instance in the Oracle Cloud (OCI) console, you will see there is a button labelled DB Connection – go ahead and click on that:

Viewing the Autonomous Database instance in the Oracle Cloud Console.

In the slide out details page, there is a button labelled Download wallet – click on that and save the file somewhere convenient.

Downloading the wallet.

When you unzip the wallet file, you will see it contains a number of files, as shown below, including a tnsnames.ora and sqlnet.ora to tell your client how to access the database server, as well as some wallet files that contain certificates to authenticate to the database:

$ ls
Wallet_MYQUICKSTART.zip

$ unzip Wallet_MYQUICKSTART.zip
Archive:  Wallet_MYQUICKSTART.zip
  inflating: README
  inflating: cwallet.sso
  inflating: tnsnames.ora
  inflating: truststore.jks
  inflating: ojdbc.properties
  inflating: sqlnet.ora
  inflating: ewallet.p12
  inflating: keystore.jks

The first thing you need to do is edit the sqlnet.ora file and make sure the DIRECTORY entry matches the location where you unzipped the wallet, and then add the SSL_SERVER_DN_MATCH=yes option to the file, it should look something like this:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/home/mark/blog")))
SSL_SERVER_DN_MATCH=yes

Before we set up Mutual TLS – let’s review how we can use this wallet as-is to connect to the database using a username and password. Let’s take a look at a simple Java application that we can use to validate connectivity – you can grab the source code from GitHub:

$ git clone https://github.com/markxnelson/adb-mtls-sample

This repository contains a very simple, single class Java application that just connects to the database, checks that the connection was successful and then exits. It includes a Maven POM file to get the dependencies and to run the application.

Make sure you can compile the application successfully:

$ cd adb-mtls-sample
$ mvn clean compile

Before you run the sample, you will need to edit the Java class file to set the database JDBC URL and user to match your own environment. Notice these lines in the file src/main/java/com/github/markxnelson/SimpleJDBCTest.java:

// set the database JDBC URL - note that the alias ("myquickstart_high" in this example) and 
// the location of the wallet must be changed to match your own environment
private static String url = "jdbc:oracle:thin:@myquickstart_high?TNS_ADMIN=/home/mark/blog";
    
// the username to connect to the database with
private static String username = "admin";

You need to update these with the correct alias name for your database (it is defined in the tnsnames.ora file in the wallet you downloaded) and the location of the wallet, i.e. the directory where you unzipped the wallet, the same directory where the tnsnames.ora is located.

You also need to set the correct username that the sample should use to connect to your database. Note that the user must exist and have at least the connect privilege in the database.

Once you have made these updates, you can compile and run the sample. Note that this code expects you to provide the password for that user in an environment variable called DB_PASSWORD:

$ export DB_PASSWORD=whatever_it_is
$ mvn clean compile exec:exec

You will see the output from Maven, and toward the end, something like this:

[INFO] --- exec-maven-plugin:3.0.0:exec (default-cli) @ adb-mtls-sample ---
Trying to connect...
Connected!
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------

Great! We can connect to the database normally, using a username and password. If you want to be sure, try commenting out the two lines that set the user and password on the data source and run this again – the connection will fail and you will get an error!

Now let’s configure it to use mutual TLS instead.

I included a script called setup_wallet.sh in the sample repository. If you prefer, you can just run that script and provide the username and passwords when asked. If you want to do it manually, then read on!

First, we need to configure the Java class path to include the Oracle Wallet JAR files. Maven will have downloaded these from Maven Central for you when you compiled the application above, so you can find them in your local Maven repository:

  • $HOME/.m2/repository/com/oracle/database/security/oraclepki/19.3.0.0/oraclepki-19.3.0.0.jar
  • $HOME/.m2/repository/com/oracle/database/security/osdt_core/19.3.0.0/osdt_core-19.3.0.0.jar
  • $HOME/.m2/repository/com/oracle/database/security/osdt_cert/19.3.0.0/osdt_cert-19.3.0.0.jar

You’ll need these for the command we run below – you can put them into an environment variable called CLASSPATH for easy access.

export CLASSPATH=$HOME/.m2/repository/com/oracle/database/security/oraclepki/19.3.0.0/oraclepki-19.3.0.0.jar
export CLASSPATH=$CLASSPATH:$HOME/.m2/repository/com/oracle/database/security/osdt_core/19.3.0.0/osdt_core-19.3.0.0.jar
export CLASSPATH=$CLASSPATH:$HOME/.m2/repository/com/oracle/database/security/osdt_cert/19.3.0.0/osdt_cert-19.3.0.0.jar

Here’s the command you will need to run to add your credentials to the wallet (don’t run it yet!):

java \
    -Doracle.pki.debug=true \
    -classpath ${CLASSPATH} \
    oracle.security.pki.OracleSecretStoreTextUI \
    -nologo \
    -wrl "$USER_DEFINED_WALLET" \
    -createCredential "myquickstart_high" \
    $USER >/dev/null <<EOF
$DB_PASSWORD
$DB_PASSWORD
$WALLET_PASSWORD
EOF

First, set the environment variable USER_DEFINED_WALLET to the directory where you unzipped the wallet, i.e. the directory where the tnsnames.ora is located.

export USER_DEFINED_WALLET=/home/mark/blog

You’ll also want to change the alias in this command to match your database alias. In the example above it is myquickstart_high. You get this value from your tnsnames.ora – it’s the same one you used in the Java code earlier.

Now we are ready to run the command. This will update the wallet to add your user’s credentials and associate them with that database alias.

Once we have done that, we can edit the Java source code to comment out (or remove) the two lines that set the user and password:

//ds.setUser(username);
//ds.setPassword(password);

Now you can compile and run the program again, and this time it will get the credentials from the wallet and will use mutual TLS to connect to the database.

$ mvn clean compile exec:exec
... (lines omitted) ...
[INFO] --- exec-maven-plugin:3.0.0:exec (default-cli) @ adb-mtls-sample ---
Trying to connect...
Connected!
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------

There you have it! We can now use this wallet to allow Java applications to connect to our database securely. The example we used was pretty simple, but you could imagine putting this wallet into a Kubernetes secret and mounting that secret as a volume for a pod running a Java microservice. This provides separation of the code from the credentials and certificates needed to connect to and validate the database, and helps us build more secure microservices. Enjoy!

]]>
https://redstack.dev/2022/04/11/configuring-a-java-application-to-connect-to-autonomous-database-using-mutual-tls/feed/ 1 3388
Can Java microservices be as fast as Go? https://redstack.dev/2020/11/18/can-java-microservices-be-as-fast-as-go/ https://redstack.dev/2020/11/18/can-java-microservices-be-as-fast-as-go/#respond <![CDATA[Mark Nelson]]> Wed, 18 Nov 2020 15:30:14 +0000 <![CDATA[Uncategorized]]> https://redstack.dev/?p=3378 <![CDATA[I recently did a talk with Peter Nagy where we compared Java and Go microservices performance. We published a write up in the Helidon blog over at Medium.]]> <![CDATA[

I recently did a talk with Peter Nagy where we compared Java and Go microservices performance. We published a write up in the Helidon blog over at Medium.

]]>
https://redstack.dev/2020/11/18/can-java-microservices-be-as-fast-as-go/feed/ 0 3378
Storing ATP Wallets in a Kubernetes Secret https://redstack.dev/2020/11/18/storing-atp-wallets-in-a-kubernetes-secret/ https://redstack.dev/2020/11/18/storing-atp-wallets-in-a-kubernetes-secret/#respond <![CDATA[Mark Nelson]]> Wed, 18 Nov 2020 14:35:58 +0000 <![CDATA[Uncategorized]]> https://redstack.dev/?p=3366 <![CDATA[In this previous post, we talked about how to create a WebLogic datasource for an ATP database. In that example we put the ATP wallet into the domain directly, which is fine if your domain is on a secure environment, but … Continue reading ]]> <![CDATA[

In this previous post, we talked about how to create a WebLogic datasource for an ATP database. In that example we put the ATP wallet into the domain directly, which is fine if your domain is in a secure environment, but if we want to use ATP from a WebLogic domain running in Kubernetes, you might not want to burn the wallet into the Docker image. Doing so would enable anyone with access to the Docker image to retrieve the wallet.

A more reasonable thing to do in the Kubernetes environment would be to put the ATP wallet into a Kubernetes secret and mount that secret into the container.

You will, of course, need to decide where you are going to mount it and update the sqlnet.ora with the right path, like we did in the previous post. Once that is taken care of, you can create the secret from the wallet using a small script like this (run it from the directory containing the unzipped wallet files):

#!/bin/bash
# Copyright 2019, Oracle Corporation and/or its affiliates. All rights reserved.
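# Creates (or updates) a secret named atp-secret from the wallet files in
# the current directory; each value must be base64-encoded in the manifest.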

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: atp-secret
  namespace: default
type: Opaque
data:
  ojdbc.properties: `cat ojdbc.properties | base64 -w0`
  tnsnames.ora: `cat tnsnames.ora | base64 -w0`
  sqlnet.ora: `cat sqlnet.ora | base64 -w0`
  cwallet.sso: `cat cwallet.sso | base64 -w0`
  ewallet.p12: `cat ewallet.p12 | base64 -w0`
  keystore.jks: `cat keystore.jks | base64 -w0`
  truststore.jks: `cat truststore.jks | base64 -w0`
EOF

We need to base64 encode the data that we put into the secret. When you mount the secret on a container (in a pod), Kubernetes will decode it, so it appears to the container in its original form.
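
As an aside, kubectl can build the same secret directly from the unzipped wallet directory and take care of the base64 encoding for you; the path below is a placeholder:

kubectl create secret generic atp-secret --namespace default --from-file=/path/to/wallet

When --from-file is given a directory, each file in it becomes a key in the secret.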

Here is an example of how to mount the secret in a container:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-weblogic-server
  labels:
    app: my-weblogic-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-weblogic-server
  template:
    metadata:
      labels:
        app: my-weblogic-server
    spec:
      containers:
      - name: my-weblogic-server
        image: my-weblogic-server:1.2
        volumeMounts:
        - mountPath: /shared
          name: atp-secret
          readOnly: true
      volumes:
      - name: atp-secret
        secret:
          defaultMode: 420
          secretName: atp-secret
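
Once a pod is running, you can confirm that the wallet files were mounted as expected (the pod name below is a placeholder):

kubectl exec -it my-weblogic-server-<pod-suffix> -- ls /shared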

You will obviously still need to control access to the secret and the running containers, but overall this approach helps to provide a better security posture. Note that each key in the secret appears as a file under the mount path (for example, /shared/tnsnames.ora), so make sure the WALLET_LOCATION in your sqlnet.ora points to the same directory.

]]>
https://redstack.dev/2020/11/18/storing-atp-wallets-in-a-kubernetes-secret/feed/ 0 3366
Configuring a WebLogic Data Source to use ATP https://redstack.dev/2020/11/18/configuring-a-weblogic-data-source-to-use-atp/ https://redstack.dev/2020/11/18/configuring-a-weblogic-data-source-to-use-atp/#comments <![CDATA[Mark Nelson]]> Wed, 18 Nov 2020 14:34:46 +0000 <![CDATA[Uncategorized]]> https://redstack.dev/?p=3364 <![CDATA[In this post I am going to share details about how to configure a WebLogic data source to use ATP. If you are not familiar with ATP, it is the new Autonomous Transaction Processing service on Oracle Cloud. It provides a fully … Continue reading ]]> <![CDATA[

In this post I am going to share details about how to configure a WebLogic data source to use ATP.

If you are not familiar with ATP, it is the new Autonomous Transaction Processing service on Oracle Cloud. It provides a fully managed autonomous database. You can create a new database in the OCI console from the Database menu under “Autonomous Transaction Processing” by clicking the big blue create button.

You need to give it a name, choose the number of cores, and set an admin password.

It will take a few minutes to provision the database. Once it is ready, click on the database to view details.

Then click on the “DB Connection” button to download the wallet that we will need to connect to the database.

You need to provide a password for the wallet, and then you can download it.

Copy the wallet to your WebLogic server and unzip it. You will see the following files:

[oracle@domain1-admin-server atp]$ ls -l
total 40
-rw-rw-r--. 1 oracle oracle 6661 Feb  4 17:40 cwallet.sso
-rw-rw-r--. 1 oracle oracle 6616 Feb  4 17:40 ewallet.p12
-rw-rw-r--. 1 oracle oracle 3241 Feb  4 17:40 keystore.jks
-rw-rw-r--. 1 oracle oracle   87 Feb  4 17:40 ojdbc.properties
-rw-rw-r--. 1 oracle oracle  114 Feb  4 17:40 sqlnet.ora
-rw-rw-r--. 1 oracle oracle 6409 Feb  4 17:40 tnsnames.ora
-rw-rw-r--. 1 oracle oracle 3336 Feb  4 17:40 truststore.jks

I put these in a directory called /shared/atp. You need to update the sqlnet.ora to have the correct location as shown below:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/shared/atp")))
SSL_SERVER_DN_MATCH=yes

You will need to grab the hostname, port and service name from the tnsnames.ora to create the data source. Here is an example:

productiondb_high = (description=
  (address=(protocol=tcps)(port=1522)(host=adb.us-phoenix-1.oraclecloud.com))
  (connect_data=(service_name=feqamosccwtl3ac_productiondb_high.atp.oraclecloud.com))
  (security=(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com,OU=Oracle BMCS US,O=Oracle Corporation,L=Redwood City,ST=California,C=US")))

You can now log in to the WebLogic console and create a data source. Give it a name on the first page.

You can take the defaults on the second and third pages.

On the next page, you need to set the database name, hostname and port to the values from the tnsnames.ora.

On the next page you can provide the username and password. In this example I am just using the admin user. In a real-life scenario you would probably go and create a “normal” application user and use that, as sketched below. You can find details about how to set up SQL*Plus here.
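
For illustration, creating such a user might look like this; the name and password are placeholders, and you should grant only what your application actually needs:

-- placeholder name and password; grant least privilege
CREATE USER app_user IDENTIFIED BY "ALongStrongPassword#1";
GRANT CREATE SESSION TO app_user;
-- add further grants (CREATE TABLE, a tablespace quota, etc.) as required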

You also need to set up a set of properties that are required for ATP, as shown below; the javax.net.ssl.keyStorePassword and javax.net.ssl.trustStorePassword values are the wallet password you chose when you downloaded the wallet. You can find more details in the ATP documentation:

oracle.net.tns_admin=/shared/atp
oracle.net.ssl_version=1.2
javax.net.ssl.trustStore=/shared/atp/truststore.jks
oracle.net.ssl_server_dn_match=true
user=admin
javax.net.ssl.keyStoreType=JKS
javax.net.ssl.trustStoreType=JKS
javax.net.ssl.keyStore=/shared/atp/keystore.jks
javax.net.ssl.keyStorePassword=WebLogicCafe1
javax.net.ssl.trustStorePassword=WebLogicCafe1
oracle.jdbc.fanEnabled=false

Also notice that the URL format is jdbc:oracle:thin:@productiondb_high; you just put the alias from your tnsnames.ora file after the @.

On the next page you can target the data source to the appropriate servers, and we are done! Click the “Finish” button, and then activate your changes if you are in production mode.

You can now test the data source: in the “Monitoring” tab, go to “Testing”, select the data source, and click the “Test Data Source” button.

You will see a success message.

Enjoy!

]]>
https://redstack.dev/2020/11/18/configuring-a-weblogic-data-source-to-use-atp/feed/ 1 3364
New Steps Store launched in Wercker! https://redstack.dev/2018/04/05/new-steps-store-launched-in-wercker/ https://redstack.dev/2018/04/05/new-steps-store-launched-in-wercker/#respond <![CDATA[Mark Nelson]]> Thu, 05 Apr 2018 12:16:53 +0000 <![CDATA[Uncategorized]]> <![CDATA[CI/CD]]> <![CDATA[steps]]> <![CDATA[wercker]]> https://redstack.dev/?p=3358 <![CDATA[Wercker’s new Steps Store just went live and you can read all about it here: http://blog.wercker.com/steps-launch-of-new-steps-store In case you don’t know – Wercker is Oracle’s cloud-based (SaaS) CI/CD platform, which you can use for free at http://www.wercker.com.  Steps are reusable … Continue reading ]]> <![CDATA[

Wercker’s new Steps Store just went live and you can read all about it here:

http://blog.wercker.com/steps-launch-of-new-steps-store

In case you don’t know – Wercker is Oracle’s cloud-based (SaaS) CI/CD platform, which you can use for free at http://www.wercker.com. Steps are reusable building blocks for continuous delivery pipelines, and almost all of them are open source and free to use. We also have a paid tier, called “Oracle Container Pipelines”, which gives you dedicated resources to run your pipelines.

]]>
https://redstack.dev/2018/04/05/new-steps-store-launched-in-wercker/feed/ 0 3358
Oracle releases the open source Oracle WebLogic Server Kubernetes Operator https://redstack.dev/2018/02/06/oracle-releases-the-open-source-oracle-weblogic-server-kubernetes-operator/ https://redstack.dev/2018/02/06/oracle-releases-the-open-source-oracle-weblogic-server-kubernetes-operator/#respond <![CDATA[Mark Nelson]]> Tue, 06 Feb 2018 21:57:54 +0000 <![CDATA[Uncategorized]]> https://redstack.dev/?p=3356 <![CDATA[I am very happy to be able to announce that we have just released and open sourced the Oracle WebLogic Server Kubernetes Operator, which I have been working on with a great team of people for the last few months! … Continue reading ]]> <![CDATA[

I am very happy to be able to announce that we have just released and open sourced the Oracle WebLogic Server Kubernetes Operator, which I have been working on with a great team of people for the last few months!

You can find the official announcement on the WebLogic Server blog, and the code is on GitHub at https://github.com/oracle/weblogic-kubernetes-operator. This initial release is a “Technology Preview” which we really hope people will be interested in playing with and giving feedback on. We have already had some great feedback from our small group of testers who have been playing with it for the last couple of weeks, and we are very, very appreciative of their input. We have some great plans for the operator going forward.

]]>
https://redstack.dev/2018/02/06/oracle-releases-the-open-source-oracle-weblogic-server-kubernetes-operator/feed/ 0 3356
Oracle releases certification for WebLogic Server on Kubernetes https://redstack.dev/2018/01/16/oracle-releases-certification-for-weblogic-server-on-kubernetes/ https://redstack.dev/2018/01/16/oracle-releases-certification-for-weblogic-server-on-kubernetes/#respond <![CDATA[Mark Nelson]]> Wed, 17 Jan 2018 00:08:06 +0000 <![CDATA[Uncategorized]]> https://redstack.dev/?p=3354 <![CDATA[In case you missed it, Oracle has certified WebLogic Server on Kubernetes.  You can read all the details here: https://blogs.oracle.com/weblogicserver/weblogic-server-certification-on-kubernetes]]> <![CDATA[

In case you missed it, Oracle has certified WebLogic Server on Kubernetes.  You can read all the details here:

https://blogs.oracle.com/weblogicserver/weblogic-server-certification-on-kubernetes

]]>
https://redstack.dev/2018/01/16/oracle-releases-certification-for-weblogic-server-on-kubernetes/feed/ 0 3354
Java EE is moving to the Eclipse Foundation https://redstack.dev/2017/09/25/java-ee-is-moving-to-the-eclipse-foundation/ https://redstack.dev/2017/09/25/java-ee-is-moving-to-the-eclipse-foundation/#respond <![CDATA[Mark Nelson]]> Tue, 26 Sep 2017 01:01:13 +0000 <![CDATA[Uncategorized]]> https://redstack.dev/?p=3350 <![CDATA[I’m sure many of you have already heard the news, but in case you missed it, you might want to read all about it here!]]> <![CDATA[

I’m sure many of you have already heard the news, but in case you missed it, you might want to read all about it here!

]]>
https://redstack.dev/2017/09/25/java-ee-is-moving-to-the-eclipse-foundation/feed/ 0 3350
Java SE 9 and Java EE 8 released https://redstack.dev/2017/09/21/java-se-9-and-java-ee-8-released/ https://redstack.dev/2017/09/21/java-se-9-and-java-ee-8-released/#respond <![CDATA[Mark Nelson]]> Thu, 21 Sep 2017 22:33:25 +0000 <![CDATA[Uncategorized]]> https://redstack.dev/2017/09/21/java-se-9-and-java-ee-8-released/ <![CDATA[“Oracle today announced the general availability of Java SE 9 (JDK 9), Java Platform Enterprise Edition 8 (Java EE 8) and the Java EE 8 Software Development Kit (SDK). “ You can read the Oracle Press Release here: “Oracle Announces … Continue reading ]]> <![CDATA[

“Oracle today announced the general availability of Java SE 9 (JDK 9), Java Platform Enterprise Edition 8 (Java EE 8) and the Java EE 8 Software Development Kit (SDK).”

You can read the Oracle Press Release here: “Oracle Announces Java SE 9 and Java EE 8”.

]]>
https://redstack.dev/2017/09/21/java-se-9-and-java-ee-8-released/feed/ 0 3349
Oracle joins Cloud Native Computing Foundation https://redstack.dev/2017/09/19/oracle-joins-cloud-native-computing-foundation/ https://redstack.dev/2017/09/19/oracle-joins-cloud-native-computing-foundation/#respond <![CDATA[Mark Nelson]]> Tue, 19 Sep 2017 11:48:13 +0000 <![CDATA[Uncategorized]]> https://redstack.dev/2017/09/19/oracle-joins-cloud-native-computing-foundation/ <![CDATA[Read about it over here: https://blogs.oracle.com/developers/oracle-joins-cncf-doubles-down-further-on-kubernetes]]> <![CDATA[

Read about it over here: https://blogs.oracle.com/developers/oracle-joins-cncf-doubles-down-further-on-kubernetes

]]>
https://redstack.dev/2017/09/19/oracle-joins-cloud-native-computing-foundation/feed/ 0 3348