go-interview-practice is a self-contained coding challenge platform for Go. Fork the repo, run go run main.go inside web-ui/, and you get a local server at port 8080 with an in-browser editor, automated test execution, and a scoreboard driven by GitHub Actions.
Why I starred it
Most "interview prep" repos are just markdown files and a wish. This one ships actual infrastructure: a web server, a code execution engine, and a CI pipeline that auto-merges passing PRs and updates leaderboards. 2,192 stars and commits landing daily — it's not abandoned, people are actively submitting solutions.
The angle that caught my eye: the test runner doesn't use a sandbox service or an external judge. It spins up ephemeral Go modules on the server's own filesystem and runs go test directly. That's either brave or pragmatic depending on your threat model, and worth understanding.
How it works
The entry point is web-ui/main.go. It initializes six services — challenges, scoreboards, users, execution, packages, and AI — then hands them to a server that sets up routes. The service that does the heavy lifting is ExecutionService in web-ui/internal/services/execution.go.
When you submit code in the browser, RunCode does this:
// Create temporary directory for execution
tempDir, err := ioutil.TempDir("", "challenge-exec")
defer os.RemoveAll(tempDir)
// Write submitted code
codePath := filepath.Join(tempDir, "solution-template.go")
ioutil.WriteFile(codePath, []byte(code), 0644)
// Write the challenge's bundled test file
testPath := filepath.Join(tempDir, "solution_test.go")
ioutil.WriteFile(testPath, []byte(challenge.TestFile), 0644)
// Init a fresh Go module, install any detected imports, run tests
es.initGoModule(tempDir, challenge.ID)
es.installDependencies(tempDir, code, challenge.ID)
cmd := exec.Command("go", "test", "-v")
cmd.Dir = tempDir
output, err := cmd.CombinedOutput()
It creates a temp dir, writes your code alongside the locked test file, initializes a fresh go.mod, detects external imports in your submission and runs go get for them, then invokes go test -v. The whole thing runs synchronously and returns execution time in milliseconds. The test file is stored in the challenge struct loaded at startup — contributors can't modify it.
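The import-detection step isn't shown in that excerpt. A minimal sketch of how it can be done with the standard go/parser package (extractImports is a hypothetical helper, not the repo's function):
package main

import (
	"go/parser"
	"go/token"
	"strings"
)

// extractImports parses the submitted source in ImportsOnly mode and returns
// import paths whose first element contains a dot, i.e. hosted modules rather
// than the standard library. Illustrative sketch, not the repo's code.
func extractImports(src string) ([]string, error) {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "solution.go", src, parser.ImportsOnly)
	if err != nil {
		return nil, err
	}
	var external []string
	for _, imp := range f.Imports {
		path := strings.Trim(imp.Path.Value, `"`)
		if strings.Contains(strings.Split(path, "/")[0], ".") {
			external = append(external, path) // e.g. github.com/gin-gonic/gin
		}
	}
	return external, nil
}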
The challenge loader in challenge.go walks ../challenge-* directories from the web-ui directory, extracts the challenge number via regex, reads README.md for title and difficulty, then slurps solution-template.go and solution-template_test.go into memory. Everything lives in the filesystem; there's no database.
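The shape of that loader is easy to sketch, assuming glob-plus-regex (field and function names here are mine, apart from TestFile, which the execution excerpt above references; README parsing is omitted):
package main

import (
	"os"
	"path/filepath"
	"regexp"
	"strconv"
)

// Challenge holds everything the server needs in memory after startup.
type Challenge struct {
	ID       int
	Template string
	TestFile string
}

// loadChallenges walks ../challenge-* relative to web-ui/, extracts the number
// from the directory name, and slurps the template and test file.
// A sketch of the approach, not the repo's loader verbatim.
func loadChallenges() ([]Challenge, error) {
	dirs, err := filepath.Glob("../challenge-*")
	if err != nil {
		return nil, err
	}
	numRe := regexp.MustCompile(`challenge-(\d+)`)
	var out []Challenge
	for _, dir := range dirs {
		m := numRe.FindStringSubmatch(filepath.Base(dir))
		if m == nil {
			continue
		}
		id, _ := strconv.Atoi(m[1])
		tmpl, err := os.ReadFile(filepath.Join(dir, "solution-template.go"))
		if err != nil {
			continue
		}
		test, err := os.ReadFile(filepath.Join(dir, "solution-template_test.go"))
		if err != nil {
			continue
		}
		out = append(out, Challenge{ID: id, Template: string(tmpl), TestFile: string(test)})
	}
	return out, nil
}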
The individual challenge tests have an interesting pattern. Challenge 1's test file uses exec.Command("go", "run", "solution-template.go") inside each t.Run, feeding input via stdin and asserting on stdout. So tests aren't unit tests in the traditional sense: they build and execute the solution binary each time and compare standard output. For the concurrent BFS challenge (challenge-4), the tests call the submitted function directly instead, which means there's no single test pattern across all challenges.
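The stdio pattern, reconstructed (a sketch of the shape, not challenge-1's actual test file or cases):
package main

import (
	"bytes"
	"os/exec"
	"strings"
	"testing"
)

// TestSolutionStdout shows the challenge-1 style: each sub-test runs the
// solution with "go run", pipes input on stdin, and compares stdout.
// The test case here is a placeholder.
func TestSolutionStdout(t *testing.T) {
	cases := []struct {
		name, input, want string
	}{
		{"example", "42\n", "expected output\n"},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			cmd := exec.Command("go", "run", "solution-template.go")
			cmd.Stdin = strings.NewReader(tc.input)
			var out bytes.Buffer
			cmd.Stdout = &out
			if err := cmd.Run(); err != nil {
				t.Fatalf("go run failed: %v", err)
			}
			if got := out.String(); got != tc.want {
				t.Errorf("got %q, want %q", got, tc.want)
			}
		})
	}
}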
The AI layer in ai.go supports Gemini, OpenAI, and Claude via environment variables. Set AI_PROVIDER=gemini and GEMINI_API_KEY in a .env file in the web-ui directory and you get real-time code review and dynamically generated interview follow-up questions. The HTTP client has a 30-second timeout; model defaults to gemini-2.5-flash.
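Wiring-wise, that amounts to something like the following (AI_PROVIDER, GEMINI_API_KEY, the 30-second timeout, and the default model come from the repo; the struct and function are illustrative, not ai.go's actual types):
package main

import (
	"net/http"
	"os"
	"time"
)

// aiClient is a sketch of the configuration described above.
type aiClient struct {
	provider string
	apiKey   string
	model    string
	http     *http.Client
}

func newAIClientFromEnv() *aiClient {
	c := &aiClient{
		provider: os.Getenv("AI_PROVIDER"),
		http:     &http.Client{Timeout: 30 * time.Second},
	}
	if c.provider == "gemini" {
		c.apiKey = os.Getenv("GEMINI_API_KEY")
		c.model = "gemini-2.5-flash" // default noted above
	}
	return c
}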
The GitHub Actions pipeline is where the community loop closes. Submitting a solution means opening a PR that adds your file to challenge-N/submissions/YOUR_USERNAME/solution-template.go. The pr-tests.yml workflow validates that you only touched your own submission directory, runs the tests, and auto-merges after two days if all checks pass. A separate update-scoreboards.yml job then re-runs every solution, updates SCOREBOARD.md in each challenge, regenerates SVG profile badges, and rebuilds the README leaderboard table.
Using it
git clone https://github.com/YOURUSERNAME/go-interview-practice.git
cd go-interview-practice/web-ui
go run main.go
# Server starting on http://localhost:8080
For challenge 4 (concurrent BFS), the function you implement looks like:
func ConcurrentBFSQueries(graph map[int][]int, queries []int, numWorkers int) map[int][]int {
	// Must use goroutines + channels — tests enforce concurrency via performance assertions
	results := make(map[int][]int)
	// ...
	return results
}
The performance tests in that challenge actually verify you used concurrency — sequential implementations time out.
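One way to pass is a plain worker pool over the queries; a sketch under the stated signature (my implementation, not a reference solution from the repo):
package main

import "sync"

// ConcurrentBFSQueries fans the queries out to numWorkers goroutines and
// collects each BFS order over a channel. Sketch only.
func ConcurrentBFSQueries(graph map[int][]int, queries []int, numWorkers int) map[int][]int {
	if numWorkers < 1 {
		numWorkers = 1
	}
	type result struct {
		start int
		order []int
	}
	jobs := make(chan int)
	out := make(chan result)

	var wg sync.WaitGroup
	for w := 0; w < numWorkers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for start := range jobs {
				out <- result{start, bfs(graph, start)}
			}
		}()
	}
	go func() {
		for _, q := range queries {
			jobs <- q
		}
		close(jobs)
		wg.Wait()
		close(out)
	}()

	results := make(map[int][]int, len(queries))
	for r := range out {
		results[r.start] = r.order
	}
	return results
}

// bfs returns the visit order of a standard breadth-first traversal from start.
func bfs(graph map[int][]int, start int) []int {
	visited := map[int]bool{start: true}
	order := []int{start}
	queue := []int{start}
	for len(queue) > 0 {
		node := queue[0]
		queue = queue[1:]
		for _, next := range graph[node] {
			if !visited[next] {
				visited[next] = true
				order = append(order, next)
				queue = append(queue, next)
			}
		}
	}
	return order
}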
To enable AI review:
echo "AI_PROVIDER=gemini" > .env
echo "GEMINI_API_KEY=your_key_here" >> .env
Rough edges
The execution model runs untrusted code directly on the host machine via exec.Command. There's no container isolation, no resource limits, no timeout on individual test runs. This is fine if you're running it locally against your own solutions, which is the stated use case, but the self-hosted deployment option (Railway, etc.) with multiple users would need sandboxing.
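A per-run cap would be a small change; a sketch using the standard library's context timeout (not something ExecutionService does today, per the excerpt above):
package main

import (
	"context"
	"os/exec"
	"time"
)

// runTestsWithTimeout kills the go test process if it exceeds the limit.
// Illustrative only; it does not kill grandchild processes or cap memory
// and CPU, so it's a mitigation, not a sandbox.
func runTestsWithTimeout(dir string, limit time.Duration) ([]byte, error) {
	ctx, cancel := context.WithTimeout(context.Background(), limit)
	defer cancel()
	cmd := exec.CommandContext(ctx, "go", "test", "-v")
	cmd.Dir = dir
	return cmd.CombinedOutput()
}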
The SaveSubmissionToFilesystem function in execution.go tries three different path strategies to find the right directory, which is a sign that the relative path between web-ui/ and the challenge directories causes friction in different run contexts. It works, but the fallback chain is fragile.
The package challenges (Gin, GORM, Fiber, Cobra, MongoDB, Echo) have noticeably fewer participants than the classic challenges — 47 active users versus 335. The leaderboard data also shows most package tracks are incomplete; only odelbos has finished all four Gin challenges and all five GORM challenges.
Documentation on adding new challenges covers the directory structure but doesn't explain the test file conventions clearly. The gap between challenge-1's stdio-based tests and challenge-4's direct function call tests would trip up a first-time contributor.
Bottom line
Useful for Go developers who want structured practice with automatic grading and a community leaderboard. The self-hosted execution engine is worth reading if you're building similar tooling: it's a clean example of ephemeral Go module creation for code evaluation, even though it lacks sandboxing and resource constraints.
