What's unique about herbstluftwm? I'm curious what it offers you versus other tilers.
I'm all for pettiness in most cases... but uhh, this is a bit much, don't ya think?
I feel like GitHub should have verified repositories
How did they get the rope around him lol
OK so, I think it was running on the wrong node and using that node's resolv.conf, which I did not update. But now I am getting a new issue:
2025-05-02T21:42:30Z INF Starting tunnel tunnelID=72c14e86-612a-46a7-a80f-14cfac1f0764
2025-05-02T21:42:30Z INF Version 2025.4.2 (Checksum b1ac33cda3705e8bac2c627dfd95070cb6811024e7263d4a554060d3d8561b33)
2025-05-02T21:42:30Z INF GOOS: linux, GOVersion: go1.22.5-devel-cf, GoArch: arm64
2025-05-02T21:42:30Z INF Settings: map[no-autoupdate:true]
2025-05-02T21:42:30Z INF Environmental variables map[TUNNEL_TOKEN:*****]
2025-05-02T21:42:30Z INF Generated Connector ID: 7679bafd-f44f-41de-ab1e-96f90aa9cc34
2025-05-02T21:42:40Z ERR Failed to fetch features, default to disable error="lookup cfd-features.argotunnel.com on 10.90.0.10:53: dial udp 10.90.0.10:53: i/o timeout"
2025-05-02T21:43:30Z WRN Unable to lookup protocol percentage.
2025-05-02T21:43:30Z INF Initial protocol quic
2025-05-02T21:43:30Z INF ICMP proxy will use 10.60.0.194 as source for IPv4
2025-05-02T21:43:30Z INF ICMP proxy will use fe80::eca8:3eff:fef1:c964 in zone eth0 as source for IPv6
kube-dns usually isn't supposed to return an i/o timeout for external domains; I'm pretty sure it's supposed to forward those queries to an upstream DNS server. Or do I have to configure that?
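For context on the forwarding behavior: CoreDNS's upstream forwarding is configured in its Corefile (the `coredns` ConfigMap in `kube-system`). A typical default looks roughly like the sketch below, but your actual ConfigMap may differ, so check it with `kubectl -n kube-system get configmap coredns -o yaml`:

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    # external queries are forwarded to the upstreams
    # listed in the node's /etc/resolv.conf
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
}
```

If `forward` points at `/etc/resolv.conf` and the node's resolv.conf is stale or empty, external lookups will time out exactly like in your log.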
??? He said he talked to the principal multiple times
spiderunderurbed@raspberrypi:~/k8s $ kubectl get networkpolicy -A
No resources found
spiderunderurbed@raspberrypi:~/k8s $
No networkpolicies.
spiderunderurbed@raspberrypi:~/k8s $ kubectl get pods -A | grep -i dns
default       pdns-admin-mysql-854c4f79d9-wsclq        1/1   Running   1 (2d22h ago)   4d9h
default       pdns-mysql-master-6cddc8cd54-cgbs9       1/1   Running   0               7h49m
kube-system   coredns-ff8999cc5-hchq6                  1/1   Running   1 (2d22h ago)   4d11h
kube-system   svclb-pdns-mysql-master-1993c118-8xqzh   3/3   Running   0               4d
kube-system   svclb-pdns-mysql-master-1993c118-whf5g   3/3   Running   0               124m
spiderunderurbed@raspberrypi:~/k8s $
Ignore PowerDNS, it's just extra stuff, but yeah, CoreDNS is running.
spiderunderurbed@raspberrypi:~/k8s $ kubectl get endpoints -n kube-system
NAME             ENDPOINTS                                              AGE
kube-dns         172.16.246.61:53,172.16.246.61:53,172.16.246.61:9153   4d11h
metrics-server   172.16.246.45:10250                                    4d11h
traefik          <none>                                                 130m
spiderunderurbed@raspberrypi:~/k8s $
^ those are the endpoints; and the services:
spiderunderurbed@raspberrypi:~/k8s $ kubectl get svc -n kube-system
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kube-dns         ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP       4d11h
metrics-server   ClusterIP      10.43.67.112    <none>        443/TCP                      4d11h
traefik          LoadBalancer   10.43.116.221   <pending>     80:31123/TCP,443:30651/TCP   131m
spiderunderurbed@raspberrypi:~/k8s $
It was my backend. It turns out it forwards /nextcloud to the Nextcloud service, which does not know what to do with that path unless I set something like site-url to include it. So I made a middleware to strip the prefix, but now Nextcloud cannot access any of its files because it generates the wrong paths.
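For reference, the prefix-stripping middleware I mean is roughly this shape (the names here are placeholders, and the `apiVersion` depends on your Traefik version):

```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: nextcloud-stripprefix
  namespace: default
spec:
  stripPrefix:
    prefixes:
      - /nextcloud
```

The catch is that stripping only fixes the inbound request; Nextcloud still builds links and asset URLs without the /nextcloud prefix unless it is told about it (e.g. via settings like `overwritewebroot`), which is why the files 404 after stripping.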
Well, it's kube-dns, and it simply does not work. More specifically, it cannot resolve any external domains. I think it can resolve internal domains, though I doubt that's fully working either, but mainly it can't resolve external domains. I posted about it here: https://lemmy.zip/post/36964791
Recently it was fixed because I found the correct endpoint, and, uhh, now it has stopped working again. I updated the endpoint to the newer one, but it went back to the original issue detailed in that post.
No, I want to replace kube-dns/CoreDNS. Some of my applications will resolve an IP at my DNS server and then try those IPs within the server, but mainly I want to replace the current DNS stack due to several issues.
I solved the issue: the Jellyfin pod, for some reason, was connecting to the wrong endpoint for the internal kube-dns service. I fixed that and also made it use the service's internal FQDN, and now it works.
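The FQDN form I switched to follows the standard Kubernetes service DNS pattern. A small sketch of how those names are built (the cluster domain is usually `cluster.local` unless you changed it at install time):

```python
def service_fqdn(service: str, namespace: str,
                 cluster_domain: str = "cluster.local") -> str:
    """Build the fully qualified in-cluster DNS name for a Kubernetes Service."""
    return f"{service}.{namespace}.svc.{cluster_domain}"


# The kube-dns Service from the output above resolves at:
print(service_fqdn("kube-dns", "kube-system"))
# kube-dns.kube-system.svc.cluster.local
```

Using the full `<service>.<namespace>.svc.<cluster-domain>` form avoids depending on the pod's search-domain list in resolv.conf, which is exactly the part that was stale here.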
Would signal also work?