
Show HN: Telescope – an open-source web-based log viewer for logs in ClickHouse


74 comments

February 26, 2025

Hey everyone! I’m working on Telescope - an open-source web-based log viewer designed to make working with logs stored in ClickHouse easier and more intuitive.

I wasn’t happy with existing log viewers - most of them force a specific log format, are tied to ingestion pipelines, or are just a small part of a larger platform. Others didn’t display logs the way I wanted.

So I decided to build my own lightweight, flexible log viewer - one that actually fits my needs.

Check it out:

    Video demo: https://www.youtube.com/watch?v=5IItMOXwugY

    GitHub: https://github.com/iamtelescope/telescope

    Live demo: https://telescope.humanuser.net

    Discord: https://discord.gg/rXpjDnEc

piterrro

There's also Logdy (https://github.com/logdyhq/logdy-core), which can work with raw files and comes with a web UI, all in a single precompiled binary, so there's no install or setup. If you're looking for a simple solution for browsing log files with a web UI, this might be it! (I'm the author)

corytheboyd

Heyo, I've noticed Logdy come up a few times on HN now, and was curious whether you've explored making it a proper desktop application instead of a two-part UI and CLI application. Did you rule that out for some reason?

piterrro

I'm not ruling that out; however, honestly, there hasn't been user feedback pointing to that use case. So far users love that they can just drop a binary on a remote server and spin up a web UI. It's similar with the local env. The nature of Logdy is that it's primarily designed to work in the CLI. What would be the use case for a desktop app?

nh2

It would be great if the docs could describe a bit what exactly one has to do to use this as an alternative to Grafana Loki.

How do I get my logs (e.g. local text files from disk like nginx logs, or files that need transformation like systemd journal logs) into ClickHouse in a way that's useful for Telescope?

What kind of indices do I have to configure so that queries are fast? Ideally with some examples.

How can I make full-text substring search queries fast (e.g. "unexpected error 123")? When I filter with a regex, is that still fast / does it still use indices?
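
To make that concrete, here's the kind of search I mean (purely illustrative - the `logs` table and `message` column are names I made up):

    SELECT timestamp, message
    FROM logs
    WHERE message LIKE '%unexpected error 123%'
    ORDER BY timestamp DESC
    LIMIT 100;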

From the docs it isn't quite clear to me how to configure the system so that I can just put a couple TB of logs into it and have queries be fast.

Thanks!

r0b3r4

Telescope is primarily focused on log visualization, not on log collection or on preparing ClickHouse for storage. The system does not currently provide (and I don't think it ever will) built-in mechanisms for ingesting logs from any source.

I will consider providing a how-to guide on setting up log storage in ClickHouse, but I’m afraid I won’t be able to cover all possible scenarios. This is a highly specific topic that depends on the infrastructure and needs of each organization.

If you’re looking for an all-in-one solution that can both collect and visualize logs, you might want to check out https://www.highlight.io or https://signoz.io or other similar projects.

Also, by the way, I'm not trying to create a "Grafana Loki killer" or a "killer" of any other tool. This is just an open-source project - I simply want to build a great log viewer without worrying about how to attract users from Grafana Loki or Elastic or any other tool/product.

nh2

I think such a guide would be great.

My perspective:

A lot of people who operate servers (including me) just want to view and search their logs -- fast and convenient. Your tool provides that. They don't care whether the backend uses ClickHouse or Postgres or whatever; that's just a pesky detail. They understand they may have to deal with it to some extent, but they don't want to have to become experts in those tools, or figure everything out by themselves, just to read their logs.

Also, those are general-purpose databases, so they don't tell the user how best to set them up so that your tool can produce results fast and conveniently. So currently, neither side helps the user with that.

That's why it's best if your tool's docs give some basic tips on how to achieve the most commonly desired goals: some basic way to get logs into the backend DB (if there's a standard way to do that for text log files and journald, it's probably fine to just link it), and docs on what indices Telescope needs to be faster than grep for typical log search tasks (ideally with some quick snippet or link on how to set those up, for people who haven't used ClickHouse before).
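
For instance, the kind of snippet I have in mind - a rough sketch on my part, not something from Telescope's docs; the table/column names are made up, and the bloom-filter parameters would need tuning:

    CREATE TABLE logs
    (
        timestamp DateTime64(3),
        service   LowCardinality(String),
        message   String,
        -- token-based bloom filter so word searches can skip whole granules
        INDEX message_tokens message TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 4
    )
    ENGINE = MergeTree
    PARTITION BY toDate(timestamp)
    ORDER BY (service, timestamp);

With something like that in place, a search such as hasToken(message, 'unexpected') can prune most of the table instead of scanning all of it.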

So overall, it's fine if the tool doesn't do everything. But it should say what it needs to work well.


sleepybrett

As someone who has never worked anywhere that tried it out, what do you not like about Loki? I've been stuck in the very expensive Splunk and OpenSearch/Kibana mines for many years, and I find it an amazingly frustrating place to be. I honestly find that I can debug via logs better using grep than either of those tools.

nh2

Loki works fine for what it does; the problem is what it lacks.

It doesn't do full-text search indices. So if you just search for some word across all your logs (to find, e.g., when a rare error happened), it is very slow (it runs the equivalent of grep, at 500 MB/s on my machine). If you have a couple TB, a single search takes well over half an hour!

As you say, even plain grep is usually faster for such plain linear search.

I want full-text indices so that such searches take milliseconds, or a couple seconds at most.

sleepybrett

See, to me, having at one point been responsible for maintaining an ES instance for logs (and exporters and all the other bits), the price you pay in engineering hours and hardware costs to maintain all those indexes while keeping ES from absolutely melting down is way too high.

I think grep is amazing, but yes, if you unleash it on 'all the logs' without first narrowing down to a time frame or some other taxonomy, it's going to be slow. This seems like a skill issue, frankly.

Also, full-text indexes for all the things are generally FASTER of course, but seconds/milliseconds? How much hardware are you throwing at logs? Most people only go to logs in an emergency, during an incident and the like. How much are you paying just to index a bunch of shit that will probably never even be looked at, and how much are you paying for hardware to run queries on those indexes that will be largely idle?

The problem with ES/Splunk for logs is that they were not designed for logs, so they are both, in my view, overkill AND underkill for the task. Full fuzzy text search is probably overkill; the UI for the task of dealing with log data is underkill. (The cloud bills are certainly overkill.)

I'm currently doing platform engineering at a company in the top half of the Fortune 500. Honestly, probably about 90-95% of the time when I'm helping a team troubleshoot their service on Kubernetes, I'm using the kubectl `stern` plugin (shows log streams from all pods that match a label query) plus grep/sed/awk/jq if it's an ongoing issue; it's just waaaaay more responsive. If it's a 'weird thing happened last night, investigate' task and I have to go to Kibana, it's just a much worse experience overall.

charrondev

On the naming front, Telescope is already used for a log viewer: https://laravel.com/docs/11.x/telescope

If I search "telescope logs" on Google, that's the top result for me.

vortegne

Unfortunate name choice, as @csh602 mentioned

Viewer looks pretty good though. Reminds me of DataDog UI, but not as slow. Will play around more, thanks!

r0b3r4

As we all know, naming is an unsolvable problem in IT :)

Regarding performance - 95% of Telescope's speed depends on how fast your ClickHouse responds. If you have a well-optimized schema and use the right indexes, Telescope's overhead will be minimal.
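
For example (just a sketch, assuming a hypothetical table with a sorting key like ORDER BY (service, timestamp)), a typical "one service, last 15 minutes" query reads only the matching parts of the table instead of scanning everything:

    SELECT count()
    FROM logs
    WHERE service = 'nginx'
      AND timestamp >= now() - INTERVAL 15 MINUTE;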

azophy_2

Is there any comprehensive guide to building an observability stack using OTel, ClickHouse, and Grafana? I think this is a solid stack for logging & tracing, but I've been looking into it and haven't found any authoritative reference for this particular stack (unlike the ELK & LGTM stacks).

tacker2000

Looks cool, I might try it out!

I need a central place, something simple, where I can actually read the contents of the logs generated by the dozens of services that I run for clients, etc… instead of stupidly SSH’ing to every server.

Does this fit the use case?

I tried Loki once but it was painful to set up and more geared toward aggregating events and stats.

r0b3r4

Thanks! Telescope is more focused on displaying logs and providing access to them rather than handling log ingestion. In the future, I plan to support various sources like Docker, k8s, and files to improve the local development workflow. However, it's unlikely that Telescope will support fetching logs from remote servers via SSH, as that's not its primary use case.

xorcist

If all you want is the plaintext logs, there's no need to bother with special products. Just point syslog in the right direction as if it was 1995. Everything can log to syslog already. Things like Splunk, Graylog and Kibana are mostly for visualization and query interfaces.

walth

I'd recommend VictoriaLogs, shipping to it via Vector.

dengolius

I'd also recommend not hesitating to use other log shippers, as VictoriaLogs supports ingestion from more than just Vector - see https://docs.victoriametrics.com/victorialogs/data-ingestion...

homebrewer

Graylog is a pretty standard solution to your problems (I believe), although they've been closing down their licensing more and more as time goes on.

piterrro

I'm the author of Logdy (https://logdy.dev/, https://github.com/logdyhq/logdy-core). It comes as a precompiled binary you can download/deploy on the server and use to browse larger log files. I suggest you take a look!

iwanhae

I’m curious to know what makes the Loki installation process so painful.

I’m interested in learning more about the software installation experience.

samsk

The only problematic thing might be the relatively frequent storage changes (they like to deprecate the primary storage driver), otherwise it's IMHO easy to set up. I'm running it on several projects because it doesn't need a beefy machine like Elastic or even ClickHouse.

perteraul

Genuinely wondering if https://multiplayer.app would work for you.

note: I'm part of the Multiplayer team.

PeterZaitsev

Looks cool!

If you're looking for this kind of UI, also check out Coroot (https://github.com/coroot/coroot), which has an awesome UI for logs and OpenTelemetry traces and also stores its data in ClickHouse.

mikeshi42

This looks pretty cool, I love seeing more ClickHouse-native logging platforms springing up! When I talk to other engineers, it's a surprisingly underrated platform to build on.

I'm one of the authors of an existing log viewer (HyperDX) and was curious: were we one of those platforms that didn't fit your needs? Always love learning what use cases inspire different approaches.

kbumsik

How is it different from SigNoz, a complete observability stack (including logs) built on top of ClickHouse?

r0b3r4

Telescope is focused purely on viewing existing log data. It doesn't enforce any specific ingestion setup or schema, and it doesn't support traces or session storage.

You can think of it as just one part of a logging platform, where a full platform might consist of multiple components like a UI, ingestion engine, logging agent, and storage. In this setup, Telescope is only the UI.

darkstar_16

I like how this is mostly based on the Kibana UI. Makes it easier to convince other people to move to it.

r0b3r4

To be honest, I was more inspired by DataDog :)

danmur

I've used Graylog the most, so that's what it looks like to me :P. I like how you can do a bunch of extraction stuff right there in the query interface, though; that's awesome. It seems like a very thoughtful UI.

sleepybrett

Honestly, that pushes me away from it. I find Kibana to be a very frustrating experience.

mikeshi42

(Not OP) Curious what you find frustrating about it?

sleepybrett

At the enterprise scale, on the backend you end up paying for a bunch of indexing you will likely never use. On top of that, you spend a LOT of money in engineering hours setting up indexes for many teams, all with different log formats, so the whole thing doesn't just melt down.

On the Kibana side, their query language is unshared by any other tool, at least any that I use, meaning that in the middle of an outage I end up chasing my tail reading docs on how to query what I want. Results are often slow to come back, and it's very hard to just export the logs you do find to text files so you can ingest them into other tools.

I mean, I came up on cat/grep/awk/sed/less/tail (more recently jq for JSON logs)... it wasn't perfect, but it was RESPONSIVE and portable.

I just think that tools like ES/Splunk weren't conceived for dealing with logs (especially if your logs come in many formats) and are both overkill and at the same time underkill for the task. It's like using a ball-peen hammer to drive nails: you can certainly DO it, but a claw hammer is cheaper and a more ergonomic experience.

smjburton

Would this also work with something like Plausible (https://github.com/plausible/analytics), which uses ClickHouse to store web analytics data, or is it primarily for log data?

r0b3r4

Although Telescope is focused on application log data, it can be used for any type of data, as long as it's stored in ClickHouse and has some time fields.

At the moment, I have no plans to support arbitrary data visualization in Telescope, as I believe there are better BI-like tools for that scenario.

smjburton

Yeah that's fair, thank you.

new_user_final

Rollbar has a feature to upload JavaScript sourcemap files. When I'm viewing logs from minified JS files, it automatically applies the sourcemaps and correctly shows line numbers.

Is there any open source tool that does the same?