feat: implement service logs command with colored output and strict ordering #99
base: main
Conversation
Hey @tonyo @psviderski!
Great stuff 💪 Haven't looked too deep yet, but big +1 for following the docker / docker-compose semantics.
That might blow up the PR size, so I'd suggest leaving it for a follow-up PR and focusing on basic functionality and the user interface here.
I like the direction!
To give you some food for thought, here is a talk about grpc-proxy from one of the Sidero (Talos Linux) developers, who forked grpc-proxy and added support for 1-to-many requests, including for streaming methods. We use this project.
I wonder if we can encapsulate some of the complexity currently implemented on the client in a server-side gRPC call ServiceLogs. Note that this call could be broadcast to N machines at the same time, and it would return a single stream combining all the streams from the machines. So basically all the multiplexing would be done by grpc-proxy.
What we'd need to do on the client is reorder log entries when strict order is required. I believe this could reliably be done using a buffer plus periodic checkpoint entries sent when containers produce no logs. This way we can flush the next log entry once we have received at least one entry from each machine.
This idea might not work, though, but feel free to investigate it if you like.
```go
// Read initial entries from each channel
for _, ch := range machineChannels {
	select {
	case entry, ok := <-ch:
```
Is there a risk that a particular machine won't have any logs for a while so we will block here?
```go
	})
}
default:
	// Channel is truly empty, don't re-add
```
Could it become non-empty later if we're using the --follow mode?
This one could be tricky. I have a feeling that we might need to send empty "checkpoint" entries to the streams at a regular interval when there are no container logs. They would help distinguish between problems communicating with the machine and the machine simply having no logs.
```proto
message ContainerLogsResponse {
  // Stream type: 1 = stdout, 2 = stderr
  int32 stream_type = 1;
```
FYI, protobuf supports enums:
uncloud/internal/machine/api/pb/cluster.proto
Lines 72 to 76 in 2c3bea6
```proto
enum RecordType {
  UNSPECIFIED = 0;
  A = 1;
  AAAA = 2;
}
```
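Applied to the message above, a hypothetical enum-based variant (a sketch only, not what the PR contains) could look like:

```proto
// Sketch: replacing the raw int32 with an enum, following the
// RecordType style from cluster.proto.
message ContainerLogsResponse {
  enum StreamType {
    STREAM_TYPE_UNSPECIFIED = 0;
    STREAM_TYPE_STDOUT = 1;
    STREAM_TYPE_STDERR = 2;
  }
  StreamType stream_type = 1;
}
```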
This PR implements the uc logs command for viewing aggregated logs from all containers of a service across the Uncloud cluster.

Part 0
- uc logs <service> command that aggregates logs from all service containers
- Flags: --follow, --tail, --timestamps, --since, --until
- Two merge models, including strict ordering via the --strict-order flag

Examples
- Show the last 5 lines of each replica

Part 1 (WIP)
Implement persistent log storage that saves container logs to disk on each machine, enabling access to historical logs even after containers are removed or restarted.

Related issue: #12