name=<NA> pid=-1 Unable to collect proc.name and proc.pid, etc. #3234
Hey @ox01024, a possible cause here is that you are catching all possible events generated by Falco, and so you are facing many drops. This is just a guess, but looking at your logs I see events like the ones quoted. If you just use Falco with the default configuration and default ruleset, they should be disabled by default. You can check what you are enabling with: `sudo ./usr/bin/falco -c ./etc/falco/falco.yaml -r ./etc/falco/falco_rules.yaml -o engine.kind=modern_ebpf -o log_level=debug`
Thank you for your helpful response. I will continue to investigate the issue along the lines you suggested. If the situation stabilizes, I will post updates promptly to help others who run into similar problems.
@Andreagit97 It is like this:

```yaml
- rule: Test
  desc: >
    Test
  condition: >
    evt.type=recvmsg
  enabled: true
  output: Test ( pid=%proc.pid connection=%fd.name lport=%fd.lport rport=%fd.rport fd_type=%fd.type fd_proto=%fd.l4proto evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty %container.info)
  priority: WARNING
  tags: [maturity_deprecated, host, container, network, mitre_lateral_movement, T1021.004]
```

However, the results remain the same, so the problem should not be caused by these events.
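As the earlier comment suggests, a condition matching every `recvmsg` on the host generates a very high event rate and can cause drops. A hypothetical narrower variant (the port value below is a placeholder, not from the original rule) would look like:

```yaml
# Sketch only: scope the rule to reduce event volume while debugging.
# fd.lport=8080 is a placeholder filter, not part of the reported rule.
- rule: Test scoped
  desc: >
    Test, restricted to one local port
  condition: >
    evt.type=recvmsg and fd.lport=8080
  enabled: true
  output: Test ( pid=%proc.pid connection=%fd.name process=%proc.name )
  priority: WARNING
```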
As mentioned above, this issue did not occur in earlier versions. Based on that, it may be caused by this pull request: falcosecurity/libs#1058. The newly added get_cgroup_subsystems_v2 logic fails when a process's cgroup mount has no cgroup.controllers file; for example, under /proc/1/root/run/calico/cgroup no cgroup content can be found.
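The failure mode described above can be reproduced outside Falco. This is a hedged sketch (not the actual libs code): on cgroup v2, enumerating subsystems means reading `cgroup.controllers` from the cgroup mountpoint, and a directory that is mounted like a cgroup but lacks that file (such as Calico's `/proc/1/root/run/calico/cgroup`) makes the lookup fail. The simulation below uses temporary directories instead of real mounts.

```python
# Hypothetical reproduction of the reported failure mode, simulated with
# temporary directories rather than real cgroup mounts.
import os
import tempfile

def read_cgroup_v2_controllers(mountpoint):
    """Return the controller list from <mountpoint>/cgroup.controllers.

    Raises FileNotFoundError when the file is absent, which is the
    situation this issue describes for the Calico cgroup path.
    """
    path = os.path.join(mountpoint, "cgroup.controllers")
    with open(path) as f:
        return f.read().split()

# A "healthy" cgroup2 mount has cgroup.controllers; a Calico-style
# directory does not, so the same read fails there.
with tempfile.TemporaryDirectory() as good, tempfile.TemporaryDirectory() as bad:
    with open(os.path.join(good, "cgroup.controllers"), "w") as f:
        f.write("cpuset cpu io memory pids\n")

    print(read_cgroup_v2_controllers(good))  # ['cpuset', 'cpu', 'io', 'memory', 'pids']

    try:
        read_cgroup_v2_controllers(bad)
    except FileNotFoundError:
        print("no cgroup.controllers here")  # mirrors the failing processes
```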
Issues go stale after 90d of inactivity. Mark the issue as fresh to keep it open. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now, please do so. Provide feedback via https://github.com/falcosecurity/community.
/lifecycle stale
```
Linux ubuntu22 6.5.0-28-generic #29~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 4 14:39:20 UTC 2 x86_64 x86_64 x86_64 GNU/Linux

root@ubuntu22:~# docker info
Client:
 Version:    24.0.5
 Context:    default
 Debug Mode: false

Server:
 Containers: 48
  Running: 0
  Paused: 0
  Stopped: 48
 Images: 12
 Server Version: 24.0.5
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version:
 runc version:
 init version:
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.5.0-28-generic
 Operating System: Ubuntu 22.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 7.712GiB
 Name: ubuntu22
 ID: f6a2ad8b-6601-48e8-8d90-fef119a4aa17
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  registry.ahcloud-private.com:5000
  registry.storm.io
  10.50.26.198:80
  127.0.0.0/8
 Live Restore Enabled: false

root@ubuntu22:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.12", GitCommit:"b058e1760c79f46a834ba59bd7a3486ecf28237d", GitTreeState:"clean", BuildDate:"2022-07-13T14:59:18Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
```
On Ubuntu deployed with Docker + Kubernetes, Falco versions greater than 0.36.1 (estimated) with the default configuration do not collect process information such as proc.pid and proc.name.
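The symptom in the issue title shows up in alert output as `pid=-1` and `name=<NA>`. A small hedged sketch (not Falco code): with Falco's JSON output enabled, alerts carry an `output_fields` object, so affected events can be flagged by checking those two values.

```python
# Sketch: detect Falco JSON alerts where process collection failed,
# i.e. proc.pid is -1 or proc.name is "<NA>" as reported in this issue.
import json

def proc_info_missing(alert_json):
    """Return True when the alert lacks usable process information."""
    fields = json.loads(alert_json).get("output_fields", {})
    return fields.get("proc.pid") == -1 or fields.get("proc.name") == "<NA>"

# Example alerts (synthetic, shaped like Falco's output_fields payload).
broken = '{"output_fields": {"proc.pid": -1, "proc.name": "<NA>"}}'
healthy = '{"output_fields": {"proc.pid": 1234, "proc.name": "curl"}}'

print(proc_info_missing(broken))   # True
print(proc_info_missing(healthy))  # False
```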
Mitigation Measures Guidance
```
Tue Jun 4 21:29:15 2024: Falco version: 0.38.0 (x86_64)
Tue Jun 4 21:29:15 2024: Falco initialized with configuration files:
Tue Jun 4 21:29:15 2024:    /etc/falco/falco.yaml
Tue Jun 4 21:29:15 2024: System info: Linux version 6.5.0-28-generic (buildd@lcy02-amd64-098) (x86_64-linux-gnu-gcc-12 (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #29~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 4 14:39:20 UTC 2
Falco version: 0.38.0
Libs version: 0.17.1
Plugin API: 3.5.0
Engine: 0.40.0
Driver:
  API version: 8.0.0
  Schema version: 2.0.0
  Default driver: 7.2.0+driver
```
```
root@ubuntu22:~# falco --support | jq .system_info
Tue Jun 4 21:29:31 2024: Falco version: 0.38.0 (x86_64)
Tue Jun 4 21:29:31 2024: Falco initialized with configuration files:
Tue Jun 4 21:29:31 2024:    /etc/falco/falco.yaml
Tue Jun 4 21:29:31 2024: System info: Linux version 6.5.0-28-generic (buildd@lcy02-amd64-098) (x86_64-linux-gnu-gcc-12 (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #29~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 4 14:39:20 UTC 2
Tue Jun 4 21:29:31 2024: Loading rules from file /etc/falco/falco_rules.yaml
Tue Jun 4 21:29:31 2024: Loading rules from file /etc/falco/falco_rules.local.yaml
{
  "machine": "x86_64",
  "nodename": "ubuntu22",
  "release": "6.5.0-28-generic",
  "sysname": "Linux",
  "version": "#29~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 4 14:39:20 UTC 2"
}
```
```
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
```