From Local SQLite to Distributed Java RMI
Turning the EMP console program into a distributed service with manual transactions, Docker deployment, and concurrency experiments.
One-sentence summary
I upgraded a local employee database program into a distributed Java RMI system, then added explicit transactions and concurrent client experiments to observe SQLite behavior.
What the assignment required
Run the original EMP program
Understand the existing SQLite-backed console workflow.
Convert it to Java RMI
Move EMP operations behind a remote object on the server.
Enable manual transactions
Replace default auto-commit with explicit commit and rollback.
Run concurrent transaction experiments
Observe what happens when multiple clients read and write together.
Original program: local and monolithic
The user enters commands in a single Java program.
The same process opens SQLite connections and runs SQL directly.
Everything happens on one machine, so there is no distributed boundary.
Good for
- Learning the schema and existing CRUD workflow
- Single-user local execution
- Quick verification of the EMP table
Not enough for this assignment
- No remote interface
- No server-side ownership of the database
- No clean place to control transactions for many clients
Distributed design with RMI
The client calls remote methods such as list, find, or update.
Fixed ports make Docker networking predictable.
All SQL, transactions, and validation stay on the server side.
The database file is only accessed inside the server container.
Why the database belongs on the server
- The client should not know JDBC or SQLite details.
- Transaction logic stays centralized and consistent.
- Concurrency experiments become easier to control and observe.
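The server-side ownership described above can be sketched as a minimal in-process RMI round trip: a registry on a fixed port, a bound service object, and one remote call. The interface name, binding name, and port here are illustrative, not the assignment's actual code.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class RmiBoot {
    // Illustrative remote interface; the real service exposes the EMP operations.
    public interface Hello extends Remote {
        String ping() throws RemoteException;
    }

    static class HelloImpl implements Hello {
        public String ping() { return "pong"; }
    }

    /** Starts a registry on a fixed port, binds the service, and calls it once. */
    public static String demo(int port) throws Exception {
        Registry registry = LocateRegistry.createRegistry(port); // fixed port: predictable in Docker
        HelloImpl impl = new HelloImpl();
        Hello stub = (Hello) UnicastRemoteObject.exportObject(impl, 0);
        registry.rebind("emp", stub);

        // A client in another container would use LocateRegistry.getRegistry(host, port) the same way.
        Hello remote = (Hello) LocateRegistry.getRegistry("localhost", port).lookup("emp");
        String reply = remote.ping();

        // Unexport so the JVM can exit cleanly after the demo.
        UnicastRemoteObject.unexportObject(impl, true);
        UnicastRemoteObject.unexportObject(registry, true);
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo(2099));
    }
}
```

Because only the stub crosses the network, the JDBC code and the SQLite file never leave the server process.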
Remote operations exposed
- listEmployees()
- findEmployeeById()
- addEmployee()
- updateEmployee()
- deleteEmployee()
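A plausible shape for the remote interface follows from the operation names above; the parameter and return types here are assumptions, since the source does not show the signatures.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.List;

// Illustrative signatures; the assignment's actual parameter and return types may differ.
public interface EmpService extends Remote {
    List<String> listEmployees() throws RemoteException;
    String findEmployeeById(String id) throws RemoteException;
    boolean addEmployee(String id, String name, String job) throws RemoteException;
    boolean updateEmployee(String id, String job) throws RemoteException;
    boolean deleteEmployee(String id) throws RemoteException;
}
```

Every method declares RemoteException, which is what RMI requires and what distinguishes this boundary from a local call.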
Manual transactions and connection scope
Key rule
Each remote request gets its own JDBC connection. I do not share one global connection across all client threads.
- Open connection inside the remote request
- Set autoCommit(false) for writes
- commit() on success
- rollback() on failure
- Close the connection in finally
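The per-request pattern can be sketched as a small helper that wraps one unit of work. The SqlWork interface is a hypothetical helper name, and the Connection below is a call-recording stand-in built with java.lang.reflect.Proxy so the sketch runs without a real database driver.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class TxDemo {
    /** One unit of server-side work run inside a single transaction (hypothetical helper). */
    interface SqlWork {
        void run(Connection c) throws SQLException;
    }

    /** Per-request scope: disable auto-commit, commit on success, roll back on failure, always close. */
    static void inTransaction(Connection c, SqlWork work) throws SQLException {
        try {
            c.setAutoCommit(false);
            work.run(c);
            c.commit();
        } catch (SQLException e) {
            c.rollback();      // database left unchanged on failure
            throw e;
        } finally {
            c.close();         // never leak the per-request connection
        }
    }

    /** Runs the pattern against a fake Connection that records which methods were called. */
    static List<String> demo(boolean fail) {
        List<String> calls = new ArrayList<>();
        InvocationHandler recorder = (proxy, method, methodArgs) -> {
            calls.add(method.getName());
            return null;
        };
        Connection fake = (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(), new Class<?>[]{Connection.class}, recorder);
        try {
            inTransaction(fake, c -> {
                if (fail) throw new SQLException("validation failed");
            });
        } catch (SQLException expected) {
            // rollback already happened inside inTransaction
        }
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(demo(false)); // [setAutoCommit, commit, close]
        System.out.println(demo(true));  // [setAutoCommit, rollback, close]
    }
}
```

The success path and failure path differ only in commit versus rollback; close happens in both, which is exactly the per-request boundary described above.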
Why this matters
- Safer under concurrent RMI requests
- Clear transaction boundary for each client action
- Database remains unchanged when validation fails
- Easy to reason about rollback behavior
Docker deployment on sh5
Compose services
- local-app for the original non-distributed version
- rmi-server for the server and database access
- rmi-client for normal scripted or interactive calls
- extra clients for concurrency experiments
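A minimal compose sketch for this layout might look like the following; the build contexts, port, and volume path are assumptions, only the service names come from the list above.

```yaml
# Hypothetical layout; actual build contexts, ports, and volumes will differ.
services:
  local-app:
    build: ./local-app        # original non-distributed version
  rmi-server:
    build: ./rmi-server
    ports:
      - "1099:1099"           # fixed RMI registry port for predictable networking
    volumes:
      - ./data:/data          # SQLite file stays inside this service
  rmi-client:
    build: ./rmi-client
    depends_on:
      - rmi-server
```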
Practical deployment note
The remote Docker Hub mirror on sh5 returned EOF errors, so I built
a small self-contained runtime bundle and deployed that image instead of relying
on remote base-image pulls.
Docker engine runs the assignment stack.
The image bundles the application jar, a minimal JRE, and SQLite native-library support.
The same app runs without installing Java on the host.
Concurrent transaction experiment
Scenario A
read + read + read
Check whether concurrent readers can proceed together.
Scenario B
read + insert + update
Observe how read and write requests interleave.
Scenario C
insert + update + delete
Push the system into competing writes and watch final consistency.
What I was looking for
- Execution order in the logs
- Whether writes block or serialize
- Whether commit and rollback keep the table consistent
Expected SQLite behavior
- Reads usually coexist well
- Writes are more limited and often serialize
- Lock waits are valid experimental evidence, not necessarily a bug
Results and verification
Smoke-test results
- Original local version runs
- RMI server starts successfully
- list, find, add, update, delete all work
- The stack now runs on sh5 through Docker
Concurrent run result
In the test run, the system:
- updated E2 from Syst. Anal. to Programmer
- inserted E9 A. Chen Programmer
- kept the database consistent
Key conclusion
The assignment goals were all covered: local execution, distributed RMI, manual transactions, and concurrent client behavior under SQLite.
Live demo plan
1. Show the running service
2. Read the current EMP table
3. Perform one write operation
4. Optional concurrency demo
Why this demo sequence works
- It proves the server is alive first.
- Then it shows that remote reads work.
- Finally, it shows that remote writes change the database state.
Takeaways
Technical lessons
- A local database app becomes distributed once the remote boundary is added.
- Transaction scope must be explicit and per request.
- SQLite can support this assignment well, but write concurrency is limited.
- Docker made the environment repeatable across machines.
Final statement
I did not just “make it run.” I restructured the original EMP program into a client/server design, made transactions explicit, and used concurrent clients to study real database behavior.