
[Feature]: Improve the latency of analysis when there are some errors in cluster #1236

Open

jxs1211 opened this issue Aug 23, 2024 · 7 comments
jxs1211 commented Aug 23, 2024

Checklist

  • I've searched for similar issues and couldn't find anything matching
  • I've discussed this feature request in the K8sGPT Slack and got positive feedback

Is this feature request related to a problem?

None

Problem Description

Recently I encountered latency issues like the one below, which caused a long wait for the response when there were some errors (around 30+) in the cluster:

k8sgpt analyze -o json --kubecontext   1.10s user 0.10s system 3% cpu 30.487 total
jq .  0.04s user 0.01s system 0% cpu 30.495 total

Solution Description

Ideally, the analysis latency should stay within a few seconds when network connectivity is good.

Benefits

It will improve the user experience and make the analysis process more efficient.

Potential Drawbacks

No response

Additional Information

No response

@matthisholleville
Contributor

Hi @jxs1211

I can reproduce the issue. The problem seems to come from the Service analyzer.

(matthisholleville) ➜  k8sgpt git:(main) ✗ time ./bin/k8sgpt analyze
2024/08/26 13:32:47 Analyzer Ingress took 57.249625ms
2024/08/26 13:32:47 Analyzer PersistentVolumeClaim took 57.525541ms
2024/08/26 13:32:47 Analyzer CronJob took 64.764125ms
2024/08/26 13:32:47 Analyzer Node took 162.466084ms
2024/08/26 13:32:47 Analyzer Deployment took 168.8345ms
2024/08/26 13:32:47 Analyzer ReplicaSet took 281.76075ms
2024/08/26 13:32:47 Analyzer StatefulSet took 501.487167ms
2024/08/26 13:32:48 Analyzer Pod took 1.263196583s
2024/08/26 13:33:23 Analyzer Service took 36.287712708s

I'm looking into optimizing it and adding a flag to display stats for each analyzer.
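For context, a minimal sketch (not the actual k8sgpt implementation; runAnalyzer and the analyzer list are hypothetical placeholders) of how per-analyzer timing stats like the log lines above could be collected:

```go
package main

import (
	"log"
	"time"
)

// runAnalyzer stands in for a real analyzer; each analyzer would list and
// inspect its own resource kind here.
func runAnalyzer(name string) error {
	time.Sleep(50 * time.Millisecond) // simulated work
	return nil
}

func main() {
	for _, name := range []string{"Ingress", "Pod", "Service"} {
		start := time.Now()
		if err := runAnalyzer(name); err != nil {
			log.Printf("Analyzer %s failed: %v", name, err)
			continue
		}
		log.Printf("Analyzer %s took %s", name, time.Since(start))
	}
}
```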

@matthisholleville
Contributor

In my case, the Service analyzer needs to analyze 182 items. By adding more logs, I can see that each item takes approximately 1 second to be analyzed. Concurrency is managed at the analyzer level, but we might want to consider applying it within each analyzer as well. What do you think, @AlexsJones ?

Also, do you think it would be useful to have more detailed statistics, such as the number of items analyzed and the P90 of the execution time per item?
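As a rough illustration of the idea above, here is a sketch (not k8sgpt code; analyzeItem and the worker limit are assumptions for the example) of applying bounded concurrency within a single analyzer using errgroup:

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// analyzeItem is a hypothetical stand-in for the ~1s of work each Service
// item was observed to take.
func analyzeItem(ctx context.Context, name string) error {
	fmt.Println("analyzing", name)
	return nil
}

// analyzeAll fans the items out to a bounded number of goroutines so the
// analyzer finishes in roughly items/limit seconds instead of one item per
// second, while capping the pressure on the Kubernetes API.
func analyzeAll(ctx context.Context, items []string) error {
	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(10) // at most 10 items in flight at once
	for _, item := range items {
		item := item // capture loop variable (needed before Go 1.22)
		g.Go(func() error {
			return analyzeItem(ctx, item)
		})
	}
	return g.Wait()
}

func main() {
	if err := analyzeAll(context.Background(), []string{"svc-a", "svc-b", "svc-c"}); err != nil {
		fmt.Println("error:", err)
	}
}
```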

@AlexsJones
Member

> In my case, the Service analyzer needs to analyze 182 items. By adding more logs, I can see that each item takes approximately 1 second to be analyzed. Concurrency is managed at the analyzer level, but we might want to consider applying it within each analyzer as well. What do you think, @AlexsJones ?
>
> Also, do you think it would be useful to have more detailed statistics, such as the number of items analyzed and the P90 of the execution time per item?

I think we should have the ability to either be selective on logging or turn it off.
If we split out the analysers, it will absolutely hammer the K8s API in/out of cluster.

@matthisholleville
Contributor

In this PR I've proposed a new option for displaying stats only.

It also seems to me that it could be useful to have information such as the number of items analyzed.

@jxs1211
Author

jxs1211 commented Sep 18, 2024

> Hi @jxs1211
>
> I can reproduce the issue. The problem seems to come from the Service analyzer.
>
> (matthisholleville) ➜  k8sgpt git:(main) ✗ time ./bin/k8sgpt analyze
> 2024/08/26 13:32:47 Analyzer Ingress took 57.249625ms
> 2024/08/26 13:32:47 Analyzer PersistentVolumeClaim took 57.525541ms
> 2024/08/26 13:32:47 Analyzer CronJob took 64.764125ms
> 2024/08/26 13:32:47 Analyzer Node took 162.466084ms
> 2024/08/26 13:32:47 Analyzer Deployment took 168.8345ms
> 2024/08/26 13:32:47 Analyzer ReplicaSet took 281.76075ms
> 2024/08/26 13:32:47 Analyzer StatefulSet took 501.487167ms
> 2024/08/26 13:32:48 Analyzer Pod took 1.263196583s
> 2024/08/26 13:33:23 Analyzer Service took 36.287712708s
>
> I'm looking into optimizing it and adding a flag to display stats for each analyzer.

@matthisholleville Good catch. So is there any chance we could add concurrency within each analyzer to handle cases with many items?

@AlexsJones
Member

The analyzers themselves run concurrently, but our previous conversation was about making the routines within each analyser also run in parallel. The challenge here is going to be the API rate limit and back pressure.
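For reference, one common way to address that back pressure concern, sketched here under the assumption that the clientset is built from a rest.Config loaded from the default kubeconfig (the QPS/Burst values are purely illustrative, not k8sgpt defaults):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (assumption: out-of-cluster use).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// Client-side throttling: no matter how many goroutines the analyzers
	// spawn, requests to the API server are limited to ~20/s with bursts of 50.
	config.QPS = 20
	config.Burst = 50

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", clientset != nil)
}
```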

@jxs1211
Author

jxs1211 commented Oct 4, 2024

> The analyzers themselves run concurrently, but our previous conversation was about making the routines within each analyser also run in parallel. The challenge here is going to be the API rate limit and back pressure.

Gotcha, so do we need more discussion on that?

Labels: None yet
Project status: Proposed
3 participants