Some notes on Grafana Loki's new "structured metadata"
16 comments · March 16, 2025 · DeathArrow
suraci
the good part:
1. It is extremely resource-efficient.
2. It has a convenient and simple query language.
3. It works very well with traces and metrics.
the pain part:
1. It struggles to query logs over a wide time range.
2. Its indexing (or labeling) capabilities are very limited, similar to Prometheus.
3. Due to 1 and 2, it is difficult to configure and use correctly, and it is easy to run into usage limits (e.g., maximum-series limits).
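A back-of-the-envelope sketch of point 3 (the numbers here are made up for illustration): the worst-case stream count is the product of each label's value-set cardinality, which is why a single unbounded label can blow past a max-series limit.

```python
# Toy cardinality math for Loki label design (illustrative numbers only).
from math import prod

label_cardinalities = {
    "app": 20,          # bounded: one value per service
    "level": 4,         # bounded: debug/info/warn/error
    "pod": 500,         # churns on every deploy or scale event
    "trace_id": 10**6,  # effectively unbounded: one stream per request
}

def worst_case_streams(cards: dict[str, int]) -> int:
    """Upper bound on distinct streams if every label combination occurs."""
    return prod(cards.values())

# Keeping only low-cardinality labels keeps the stream count manageable;
# high-cardinality values belong in the log line (or structured metadata).
safe = {k: v for k, v in label_cardinalities.items() if k in ("app", "level")}
```

With all four labels the worst case is 4×10^10 streams; with only `app` and `level` it is 80.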
feydaykyn
Compared to Kibana, we've seen:
- 3x lower costs
- no more index corruption because a key changed type
- slower performance for queries over 1 day, especially unoptimized ones without any filtering
- a less intuitive UI/UX
So good but not perfect! When we have the time we'll look for alternatives
kbouck
Re: storage, Kibana (Elastic) has a new (as of v8.17) "logsdb" index mode which claims to be ~2.5x more storage efficient than previous options.
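If I'm remembering the 8.17 docs correctly, you opt in per index (or in an index template) via the `index.mode` setting; check the Elasticsearch docs for the exact shape:

```
PUT my-logs
{
  "settings": { "index.mode": "logsdb" }
}
```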
Unroasted6154
Loki was much cheaper to run in my experience, using S3 storage. And you could scale the parts you needed dynamically in K8s.
Elastic was kind of a resource hog and much more expensive for the same amount of data.
That might be dependent on your use case though.
weitzj
From the enterprise perspective, at least for my use cases (fine-grained permissions using an extra id), Elasticsearch with Kibana always had a solution available.
With Grafana Cloud and Loki you can get close to good usability with LBAC (label-based access control), but you still need many data sources to map onto each "team view" to make it user-friendly.
What is missing for me is what Elastic has: a single data source for all logs that every team member across all teams can see, with the visibility level scoped via LBAC.
ohgr
Kibana + ElasticSearch was a mess for us. Was glad to get rid of it. Cost a fortune to run and was time consuming. Loki conversely doesn’t even show up on our costs report (other than the S3 bucket) and requires very little if any maintenance!
Also, the out-of-the-box configuration ingests 1 TB/hr quite happily in microservices mode.
jiveturkey
(2024)
Important because the title includes _new_.
kbouck
It's also not ideal to have a different query language for each Grafana datastore (LogQL, PromQL, TraceQL). Are there any plans to make a unified Grafana query language?
pjd7
Unifying things slows engineers down, so probably not (for some time).
ohgr
Not much I agree with in this article. It seems to be based on little operational experience with the product, as indicated by a couple of major mistakes and assumptions (compaction does happen; the manual's coverage of deployment configurations clearly wasn't read closely).
Loki has its idiosyncrasies, but they are there for a good reason. Anyone who has sat waiting hours for a Kibana or Splunk query to run to get some information out will know what I'm referring to. You don't dragnet your entire log stream unless your logs are terrible (which needs to be fixed) or you don't know when something happened (which also needs fixing). I regularly watch people on older platforms run queries that scan terabytes of data with gay abandon and still never get what they need out.
The structured metadata distinction is important: when you query against it you are not using an index, just parsed-out data. That means you are explicitly not filtering, you are scanning, and that is expensive.
If you have a problem with finding things, then it's not the logging engine, it's the logs!
slekker
A disclaimer: the OP is the CEO of another company in the same sector.
Has anyone used both Grafana Loki and Kibana? Does Loki have any advantages over Kibana? I am mostly interested in resource usage and versatility of filtering.
In Kibana, if something is there I will find it with ease, and it doesn't take a lot of time to investigate issues in a microservice-based application. It is also quite fast.