Instant Kubernetes-Native Application Observability



What is Pixie?

Pixie gives you instant visibility into your Kubernetes applications, providing access to metrics, events, traces, and logs without any code changes.

Try our community beta and join our community on Slack.


Quick Start

Review Pixie's requirements to make sure that your Kubernetes cluster is supported.

Sign up

Visit our product page and sign up with your Google account.

Install CLI

Run the command below:

bash -c "$(curl -fsSL https://withpixie.ai/install.sh)"

Or see our Installation Docs to install Pixie using Docker, Debian, RPM or with the latest binary.

(Optional) Set up a sandbox

If you don't already have a K8s cluster available, you can use Minikube to set up a local environment:

  • On Linux, run minikube start --cpus=4 --memory=6000 --driver=kvm2 -p=<cluster-name>. The default Docker driver is not currently supported, so using the kvm2 driver is important.

  • On Mac, run minikube start --cpus=4 --memory=6000 -p=<cluster-name>.

More detailed instructions are available here.

Start a demo app.

🚀 Deploy Pixie

Use the CLI to deploy the Pixie Platform in your K8s cluster by running:

px deploy

Alternatively, you can deploy with YAML or Helm.


Check out our install guides and walkthrough videos for alternate install schemes.

Get Instant Auto-Telemetry

Run scripts with px CLI


Service SLA:

px run px/service_stats


Node health:

px run px/node_stats


MySQL metrics:

px run px/mysql_stats


Explore more scripts by running:

px scripts list


Check out our pxl_scripts folder for more examples.


View machine generated dashboards with Live views

The Pixie Platform auto-generates "Live View" dashboards to visualize script results.

You can view them by clicking the URLs printed by px or by visiting:

https://work.withpixie.ai/live


Pipe Pixie dust into any tool

You can transform and pipe your script results into any other system or workflow by consuming px results with tools like jq.

Example with http_data:

px run px/http_data -o json | jq -r .

More examples here
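
You can also consume the same JSON stream from any language. Here's a minimal sketch in Python (not part of the official tooling; it assumes, like the jq example above, that px -o json emits one JSON record per line, and the printed field names are illustrative):

import json
import subprocess

# Run a Pixie script and capture its JSON output.
proc = subprocess.run(
    ["px", "run", "px/http_data", "-o", "json"],
    capture_output=True,
    text=True,
    check=True,
)

# Parse each newline-delimited record and pull out a couple of fields.
for line in proc.stdout.splitlines():
    if not line.strip():
        continue
    record = json.loads(line)
    print(record.get("req_path"), record.get("resp_status"))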


To see more script examples and learn how to write your own, check out our docs for more guides.
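
To give a flavor of what a script looks like, here is a minimal PxL sketch (PxL is Pixie's Python-dialect query language; this example assumes the built-in http_events table and standard column names, and is illustrative rather than one of the bundled scripts):

import px

# Load the last 5 minutes of HTTP traffic traced by Pixie.
df = px.DataFrame(table='http_events', start_time='-5m')

# Attach the pod name from the execution context and keep a few columns.
df.pod = df.ctx['pod']
df = df[['pod', 'req_path', 'resp_status', 'latency']]

# Render the result as a table in the CLI or the Live UI.
px.display(df)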


Contributing

Refer to our contribution guide!

Under the Hood

Three fundamental innovations enable Pixie's magical developer experience:

Progressive Instrumentation: Pixie Edge Modules (“PEMs”) collect full-body request traces (via eBPF), system metrics & K8s events without the need for code changes and at less than 5% overhead. Custom metrics, traces & logs can be integrated into the Pixie Command Module.

In-Cluster Edge Compute: The Pixie Command Module is deployed in your K8s cluster to isolate data storage and computation within your environment for drastically better intelligence, performance & security.

Command Driven Interfaces: Programmatically access data via the Pixie CLI and Pixie UI, which are designed from the ground up to let you run analysis & debug scenarios faster than any other developer tool.

For more information on the Pixie Platform's architecture, check out our docs or overview deck.

About Us

Pixie was started by Pixie Labs Inc., a San Francisco-based startup. Our north star is to build a new generation of intelligent products that empower developers to engineer the future. We were acquired by New Relic in 2020.

New Relic, Inc. open sourced Pixie in April 2021.

License

Pixie is licensed under Apache License, Version 2.0.

Comments
  • Self-Hosted Pixie Install Script

    Is your feature request related to a problem? Please describe. We would like to have an install experience for the self-hosted version of Pixie that is as easy to use as the one hosted on withpixie.ai.

    Additional context: Our team has been busy at work this month open sourcing Pixie's source code, docs, website, and other assets. We are also actively applying to be a CNCF sandbox project!

    One of our last remaining items is to publish an install script to deploy a self-hosted version of Pixie.

    Who offers a hosted version of Pixie?

    New Relic currently offers a 100% free hosted version of Pixie Cloud. This hosting has no contingencies and will be offered indefinitely to the Pixie community. All the code used for hosting is open source, including our production manifest files.

    What will the Self-Hosted install script do?

    The Self-Hosted install script will deploy Pixie Cloud so that you can use Pixie without any external dependencies. This is the exact version of Pixie Cloud we deploy, so it'll behave exactly like the hosted version, but it will require your own management and configuration.

    What is the timeline? 

    Good question. :) We had planned to open source this script by 5/4. Unfortunately, we didn't make it. We need more time to ensure that installing self-hosted Pixie Cloud with the deploy script is just as easy as installing the hosted version of Pixie (in < 2 minutes!).

    But I really want to run a Self-Hosted Pixie...now!

    Technically you can build and run a self-hosted Pixie using Skaffold. Check out:

    https://github.com/pixie-labs/pixie/blob/main/skaffold/skaffold_cloud.yaml
    https://github.com/pixie-labs/pixie/tree/main/k8s/cloud
    https://github.com/pixie-labs/pixie/tree/main/k8s/cloud_deps

    These directions are not fully documented and the team is choosing to focus on quickly delivering the self-hosted install script. We'll constantly be iterating on the documentation to make the project more open source friendly.

    opened by zasgar 22
  • google login hangs

    Trying the pixie online installer. After signing up with google, login hangs forever with:

    Authenticating
    Logging in...
    

    To Reproduce:

    1. Go to signup, use google
    2. Login with google

    Expected behavior: To be logged in.

    kind/bug priority/backlog triage/not-reproducible 
    opened by Morriz 17
  • [Doc issue] no ingress installed so dev_dns_updater did nothing

    Describe the bug: I followed the documentation to deploy Pixie Cloud. The setup-dns section should update /etc/hosts if there are any ingress rules in the K8s cluster. But there weren't any!

    ➜  pixie git:(main) ✗ kubectl get ing
    No resources found in default namespace.
    ➜  pixie git:(main) ✗ kubectl get ing -n plc
    No resources found in plc namespace.
    

    And, of course, it didn't change anything:

    ➜  pixie git:(main) ✗ ./dev_dns_updater --domain-name="dev.withpixie.dev"  --kubeconfig=$HOME/.kube/config --n=plc
    INFO[0000] DNS Entries                                   entries="dev.withpixie.dev, work.dev.withpixie.dev, segment.dev.withpixie.dev, docs.dev.withpixie.dev" service=cloud-proxy-service
    INFO[0000] DNS Entries                                   entries=cloud.dev.withpixie.dev service=vzconn-service
    

    It didn't change the /etc/hosts file!

    Expected behavior: Should update /etc/hosts so that we can visit dev.withpixie.dev in the browser.

    App information (please complete the following information):

    • Pixie version: master branch
    • K8s cluster version: minikube on macOS 10.15.7 k8s version v1.22.2

    opened by Colstuwjx 12
  • Compile error, missing HTTP Tables.

    Describe the bug: Cannot run any scripts; the http_events table is not found.

    Script compilation failed: L222 : C22  Table 'http_events' not found.\n
    

    To Reproduce: Install a fresh version of Pixie on a Minikube cluster.

    Expected behavior: Pixie scripts execute.

    Logs: Collected by running ./px collect-logs; see the attached zip file pixie_logs_20210505024739.zip.

    App information (please complete the following information):

    • Pixie version: 0.5.3+Distribution.0ff53f6.20210503183144.1
    • K8s cluster version: v1.20.2
    opened by WarpWing 12
  • Can't install pixie to completely air gapped environment

    Describe the bug: Can't install Pixie in a completely air-gapped environment.

    To Reproduce: Currently I'm trying to install it via the YAML scheme. I've already pushed all images mentioned in the manifests (generated in the extract-manifests step) to my local Artifactory and replaced the original image links with local ones, but during installation Pixie still tries to download some images (e.g. busybox:1.28.0-glibc and nats:1.3.0) from the internet.

    Expected behavior: Be able to install Pixie on a self-hosted K8s cluster with no internet access.

    Logs:

    [[email protected] pixie_yamls]# kubectl get pods -n pl
    NAME                                      READY   STATUS                       RESTARTS   AGE
    etcd-operator-6c6f8cb48d-q5t8q            1/1     Running                      0          43m
    kelvin-6c67584687-pwlrg                   0/1     Init:0/1                     0          42m
    nats-operator-7bbff5c756-tt2rl            1/1     Running                      0          43m
    pl-etcd-zs25zbm5ln                        0/1     Init:ImagePullBackOff        0          41m
    pl-nats-1                                 0/1     ImagePullBackOff             0          42m
    vizier-certmgr-58d97fd6b5-8wp9n           0/1     CreateContainerConfigError   0          42m
    vizier-cloud-connector-74c5c84487-m4bmq   1/1     Running                      1          42m
    vizier-metadata-6bc96dd78-g9brg           0/1     Init:0/2                     0          42m
    vizier-pem-bv858                          0/1     Init:0/1                     0          42m
    vizier-pem-dktqv                          0/1     Init:0/1                     0          42m
    vizier-pem-ftd66                          0/1     Init:0/1                     0          42m
    vizier-pem-gmrfq                          0/1     Init:0/1                     0          42m
    vizier-pem-j7xmx                          0/1     Init:0/1                     0          42m
    vizier-pem-jxl7j                          0/1     Init:0/1                     0          42m
    vizier-pem-kcfbf                          0/1     Init:0/1                     0          42m
    vizier-pem-mgzgj                          0/1     Init:0/1                     0          42m
    vizier-pem-v7k7q                          0/1     Init:0/1                     0          42m
    vizier-proxy-8568c9bd48-fdccm             0/1     CreateContainerConfigError   0          42m
    vizier-query-broker-7b74f9cbdc-265m4      0/1     Init:0/1                     0          42m
    
    [[email protected] pixie_yamls]# kc describe pod pl-etcd-zs25zbm5ln -n pl
    Name:         pl-etcd-zs25zbm5ln
    Namespace:    pl
    ...
    Events:
      Type     Reason     Age                  From                             Message
      ----     ------     ----                 ----                             -------
      Normal   Scheduled  56m                  default-scheduler                Successfully assigned pl/pl-etcd-zs25zbm5ln to xxx
      Warning  Failed     55m                  kubelet, xxx  Failed to pull image "busybox:1.28.0-glibc": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: read tcp 192.168.0.33:34516->23.23.116.141:443: read: connection reset by peer
      Warning  Failed     55m                  kubelet, xxx  Failed to pull image "busybox:1.28.0-glibc": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: read tcp 192.168.0.33:59176->54.224.119.26:443: read: connection reset by peer
      Warning  Failed     55m                  kubelet, xxx  Failed to pull image "busybox:1.28.0-glibc": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: read tcp 192.168.0.33:42888->107.23.149.57:443: read: connection reset by peer
      Warning  Failed     54m (x4 over 55m)    kubelet, xxx  Error: ErrImagePull
      Normal   Pulling    54m (x4 over 55m)    kubelet, xxx  Pulling image "busybox:1.28.0-glibc"
      Warning  Failed     54m                  kubelet, xxx  Failed to pull image "busybox:1.28.0-glibc": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: read tcp 192.168.0.33:41714->34.238.187.50:443: read: connection reset by peer
      Normal   BackOff    45m (x43 over 55m)   kubelet, xxx  Back-off pulling image "busybox:1.28.0-glibc"
      Warning  Failed     48s (x234 over 55m)  kubelet, xxx  Error: ImagePullBackOff
    
    
    [[email protected] pixie_yamls]# kc describe pod pl-nats-1 -n pl
    Name:         pl-nats-1
    Namespace:    pl
    ...
    Events:
      Type     Reason       Age                    From                             Message
      ----     ------       ----                   ----                             -------
      Normal   Scheduled    57m                    default-scheduler                Successfully assigned pl/pl-nats-1 to yyy
      Warning  FailedMount  57m (x6 over 57m)      kubelet, yyy  MountVolume.SetUp failed for volume "server-tls-certs" : secret "service-tls-certs" not found
      Warning  Failed       56m                    kubelet, yyy  Failed to pull image "nats:1.3.0": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: read tcp 192.168.0.18:32860->3.220.36.210:443: read: connection reset by peer
      Warning  Failed       56m                    kubelet, yyy  Failed to pull image "nats:1.3.0": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: read tcp 192.168.0.18:52026->107.23.149.57:443: read: connection reset by peer
      Warning  Failed       2m26s (x227 over 56m)  kubelet, yyy Error: ImagePullBackOff
    

    App information (please complete the following information):

    • Pixie version: Pixie CLI 0.5.8+Distribution.a09aa96.20210506210658.1
    • K8s cluster version: Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.1", GitCommit:"206bcadf021e76c27513500ca24182692aabd17e", GitTreeState:"clean", BuildDate:"2020-09-09T11:26:42Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:04:18Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
    kind/feature area/deployment triage/accepted 
    opened by blencoff 11
  • JAVA profiling is not enabled by default as expected.

    I followed the tutorial and passed the following JVM flags:

    java -Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -XX:+PreserveFramePointer

    While compiling, I also used the following Gradle settings to enable Java debug symbols:

    apply plugin: 'java-library'
    compileJava {
        options.debug = true
        options.debugOptions.debugLevel = "source,lines,vars"
    }

    I'm still getting hexadecimal values instead of method names.

    kind/bug priority/critical-urgent area/deployment triage/accepted 
    opened by c3-pranjaysagar 10
  • Pixie is missing data about many pods and services in the cluster

    Describe the bug

    I encountered an issue in a self-hosted installation where Pixie is missing information about the cluster

    E.g., when I checked the pods in a namespace using the px/namespace script from the UI and CLI, only 8 pods were shown. But when I checked from kubectl, I saw 90+ pods. Similarly, Pixie showed 6 services whereas kubectl showed 40+ services.

    Also, at times, when I try to view details of a pod in the Pixie UI, there is no data for it. E.g., I selected a running pod from the cluster and entered its name in the px/pod script in the UI. But nothing was shown for it. I could only see a "No data available for inbound_requests table" message. (All the widgets in px/pod had the same no-data-available error message.) The start time I set in the Pixie UI was less than the pod's uptime as well.

    I also noticed that autocomplete in the Pixie UI doesn't show the correct resource at times. E.g. In px/pod, the pod that is shown by autocomplete does not exist in the cluster (Probably replaced by a new pod).

    I also noticed that the vizier-metadata and vizier-cloud-connector pods in the deployment had many restarts. When I checked the pods, the state change reason for the container was shown as Error.

    At times, newly created pods appear in Pixie. So this doesn't seem to be a case where Pixie is unable to get any information at all about new pods

    To Reproduce: Not sure how to reproduce this.

    Expected behavior: Expected to see all pods and services of the cluster in Pixie.

    Logs: Lines containing "error" in vizier-metadata. I am including all the repeated log lines because I want to show that they were printed within a short interval (multiple lines within some seconds as well).

    kubectl logs -f vizier-metadata-0 -n pl | grep -i Error
    time="2022-07-13T17:34:32Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:32Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:33Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:33Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:33Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:33Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:33Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:33Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:33Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:33Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:33Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:33Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:34Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:34Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:34Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:34Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:34Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:34Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:34Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:34Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:35Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:35Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:35Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:35Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:36Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:36Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:34:39Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:56:40Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T17:56:42Z" level=info msg="Failed to get update version for topic" error="<nil>"
    time="2022-07-13T18:04:05Z" level=info msg="Failed to get update version for topic" error="<nil>"
    

    vizier-cloud-connector had the following error repeated multiple times

    time="2022-07-13T18:34:46Z" level=info msg="failed vizier health check, will restart healthcheck" error="rpc error: code = Internal desc = stream terminated by RST_STREAM with error code: INTERNAL_ERROR"
    time="2022-07-13T18:34:46Z" level=info msg="failed vizier health check, will restart healthcheck" error="context canceled"
    

    App information (please complete the following information):

    • Pixie version: 0.7.14
    • K8s cluster version: 1.21.9
    • Node Kernel version:
    • Browser version: Chrome Version 103.0.5060.114 (Official Build) (x86_64)

    kind/support triage/needs-information area/k8s-metadata 
    opened by nilushancosta 9
  • gRPC-c data parsing

    Stirling now registers on the perf buffers that the gRPC-c eBPF module writes data to. There are 3 buffers:

    1. gRPC events
    2. gRPC headers
    3. close events

    The logic for handling gRPC sessions works for Golang events; this logic is now used for gRPC-c events as well. The data that the gRPC-c eBPF module passes to user space differs from the data that the Golang gRPC eBPF module passes to user space. This PR is basically an abstraction layer that "translates" gRPC-c eBPF events into the known format of Golang gRPC events.

    gRPC-c events are still not enabled; they will be enabled in the next PR, where the needed probes will be attached by the UProbeManager. However, the gRPC-c eBPF program is now compiled, because for the code to find the perf buffers, they must exist.

    opened by orishuss 9
  • px deploy failed flatcar linux kubernetes cluster

    Describe the bug: px deploy failed.

    To Reproduce: Run px deploy and see the error:

    fatal failed to fetch vizier versions error=open /home/core/.pixie/auth.json: no such file or directory

    Expected behavior: Pixie should be running properly.

    Logs: from px deploy and ./px collect-logs.

    
    App information (please complete the following information):

    • Pixie version:
    • K8s cluster version: v1.19.2

    Please help.
    
    opened by 4ss3g4f 9
  • Add pixie-operator pod logs to collect-logs cmd

    It essentially makes a new query to the kube-apiserver with the olm.catalogSource=pixie-operator-index label filter and merges the new results with the old.

    Fixes #559

    opened by victor-timofei 8
  • gRPC-c probes

    1. Initiate PerCPU variables of the gRPC-c eBPF module.
    2. Look for new processes that use the gRPC-c library. Determine the library's version by its MD5 hash. Currently, only 4 library hashes have been added. In the future, we will need to either develop a mechanism that automatically finds hashes, or determine the library's version dynamically.
    3. When a process with a gRPC-c library is found, attach 6 needed probes.

    Where to start

    I strongly encourage seeing the solution work (below) first. This way we will make sure everything is fine (from the last 2 PRs as well).

    • First PR: https://github.com/pixie-io/pixie/pull/415
    • Second PR: https://github.com/pixie-io/pixie/pull/432

    This PR has only 2 altered files: the uprobe manager.

    Seeing the entire gRPC-c solution work

    With this code, the gRPC-c library data should be visible to Stirling. To see it work, I used 2 Docker containers (client and server) running the simple "route guide" gRPC example project from the gRPC-c GitHub repository.

    1. Download the tar file and open it. It contains a folder grpc-python.
    2. Run the build-dockers.sh script.
    3. Run the server: docker run -it -p 50051:50051 python-grpc-poc-server-v1-19. This is the server, which Stirling will attach probes to.
    4. Run Stirling in the px-dev-docker (this can also be done before all of the above, doesn't really matter). You can also run Stirling in any other environment where the server is visible to it, for me the px-dev-docker was the simplest solution.
    5. Run the client: docker run -it --network host python-grpc-poc-client. Stirling will not attach to the client, because the client's version is unfamiliar to Stirling (you'll also see the Stirling log that says the MD5 of the gRPC-c library is unknown).
    6. Now traffic between client and server should start, and Stirling outputs it (given the http_events table is enabled).

    Tell me what you want to do with this tar example, perhaps we should add it to the project's scripts directory.

    Example stdout: (screenshot attached)

    opened by orishuss 8
  • px/cluster: "Namespaces" table only shows a subset of actual namespaces due to PxL script bug

    Describe the bug: The namespaces table in px/cluster (and possibly other views) does an inner join between a services table and an HTTP requests table. Namespaces that don't have any services, or that haven't received any HTTP requests in the time window, are omitted from this view.

    We should change these joins to left/right joins so that there are still results for these types of namespaces (see the sketch below).

    Also, we should audit existing scripts to ensure we don't use inner join in the wrong places elsewhere.
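
    As a rough sketch of the proposed fix (hypothetical DataFrames, not the actual px/cluster source), switching the merge from an inner join to a left join keeps namespaces that have no HTTP rows, assuming PxL's pandas-like merge API:

    import px

    # Left side: every namespace observed via process stats.
    procs = px.DataFrame(table='process_stats', start_time='-5m')
    procs.namespace = procs.ctx['namespace']
    namespaces = procs.groupby('namespace').agg()

    # Right side: per-namespace HTTP request counts.
    http = px.DataFrame(table='http_events', start_time='-5m')
    http.namespace = http.ctx['namespace']
    http_stats = http.groupby('namespace').agg(num_requests=('latency', px.count))

    # how='left' (rather than 'inner') preserves namespaces with zero HTTP traffic.
    df = namespaces.merge(http_stats, how='left',
                          left_on=['namespace'], right_on=['namespace'],
                          suffixes=['', '_x'])
    px.display(df)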

    opened by nserrino 0
  • [Self-hosted Pixie on non-minikube cluster] Login error "server closed the stream without sending trailers"

    Describe the bug: I've deployed Pixie to our self-hosted Kubernetes based on your install documentation. This is a real cluster, not minikube. Because of our internal policy we don't use a LoadBalancer service, so I replaced it with a local service and use an nginx ingress to expose it externally. Currently I can see the GUI and log in with the admin user. To deploy Pixie, I need to log in to the server:

    px auth login
    

    But it throws the error below (see screenshot). If I set local_mode=true on the login endpoint, it generates a token, but this token doesn't work and I get the error again:

    Pixie CLI
    *******************************
    * ENV VARS
    * 	 PL_CLOUD_ADDR=plc.e2e.example.net
    *******************************
    Starting browser... (if browser-based login fails, try running `px auth login --manual` for headless login)
    Fetching refresh token ...
    Failed to perform browser based auth. Will try manual auth error=rpc error: code = Internal desc = server closed the stream without sending trailers
    
    Please Visit:
     	 https://work.plc.e2e.example.net:443/login?local_mode=true
    
    Copy and paste token here:
    FATA[0059] Failed to login                               error="rpc error: code = Internal desc = server closed the stream without sending trailers"
    
    FAILED to authenticate with Pixie cloud.
    
    

    In another attempt I passed an API key, which I created with the admin user via the GUI, but it doesn't work either:

    px auth login --api_key <key>
    Pixie CLI
    *******************************
    * ENV VARS
    * 	 PL_CLOUD_ADDR=plc.e2e.example.net
    *******************************
    FATA[0008] Failed to login                               error="rpc error: code = Internal desc = server closed the stream without sending trailers"
    

    Could you please give some hints to resolve this issue?

    Screenshots: (attached)

    App information (please complete the following information):

    • Pixie version - v0.7.17
    • K8s cluster version - 1.23
    kind/support priority/awaiting-more-evidence area/deployment triage/needs-information 
    opened by amirkkn 3
  • Make it easier to zoom in and out of service maps

    Is your feature request related to a problem? Please describe. Currently, it's a bit difficult to zoom to the right resolution in service maps. The sensitivity is a bit high, and the reliance on the scroll wheel alone makes zooming difficult for users to discover.

    Describe the solution you'd like: We should consider the following ideas:

    1. Reducing the sensitivity of the scroll wheel, OR preferably
    2. Adding +/- buttons to the Service Graph & Graph widgets that only show up when the mouse hovers over the widget.

    Describe alternatives you've considered: Both options 1 & 2 could work.

    kind/feature priority/important-soon triage/accepted area/ui 
    opened by nserrino 0
  • Increased CPU utilization in traced Go applications

    Describe the bug: The CPU utilization of Golang applications increases after Pixie is deployed. The CPU overhead can be as high as 2x.

    To Reproduce: Deploy Pixie on a cluster with a Go application that uses TLS.

    Expected behavior: The CPU usage of the Go application (as reported by top) should not increase significantly.

    Screenshots: N/A

    Logs: N/A

    App information: Pixie Vizier 0.12.2

    Additional context: N/A

    kind/bug priority/critical-urgent area/datacollector 
    opened by oazizi000 1
  • Support forward proxy configurations for Pixie Vizier

    Is your feature request related to a problem? Please describe. Pixie users might want to use a forwarding proxy for communications from Vizier to the Cloud. This would help users who have firewalls and want only a single forward proxy connecting to Pixie Cloud.

    Describe the solution you'd like: Golang already supports proxy configuration for its built-in HTTP clients. Setting the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables should be sufficient to route the HTTP traffic through the proxy. Pixie should provide a way for users to specify the values of these proxy environment variables and apply them appropriately to the various Pixie services so that traffic is sent via the forwarding proxy.
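
    As a tiny illustration of this environment-variable mechanism (shown in Python, which resolves proxies from the same variables Go's net/http does; the proxy address here is hypothetical):

    import os
    import urllib.request

    # Hypothetical forward proxy; Go's net/http and Python's urllib both read these vars.
    os.environ["HTTPS_PROXY"] = "http://forward-proxy.internal:3128"
    os.environ["NO_PROXY"] = "localhost,.svc.cluster.local"

    # Print the proxy mapping picked up from the environment.
    print(urllib.request.getproxies())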

    Describe alternatives you've considered: Using the patches feature in Pixie deployments doesn't work, since patches cannot be used to add extra env vars to existing containers. Manually editing the K8s objects doesn't work either, since autoupdate causes these edits to get dropped.

    kind/feature priority/backlog triage/accepted area/vizier 
    opened by vihangm 0
  • Ability to add tolerations to PEMs

    Is your feature request related to a problem? Please describe. Users would like to easily add tolerations in order to deploy PEMs to nodes that have taints. This can be done by manually updating the resources to include the tolerations; however, these changes aren't sticky across auto-updates. You can currently use patches to add the toleration or nodeSelector to various resources, but this is prone to syntax errors. For example, setting a nodeSelector on the PEMs using Helm:

    patches.vizier-pem='\\{\"spec\"\: {\"template\"\: {\"spec\"\: { \"nodeSelector\"\: {\"pixie\"\: \"allow\" }}}}}' \
    

    Describe the solution you'd like: Users should be able to easily specify tolerations for their PEMs through the CLI and Helm chart. This should look similar to how tolerations are specified on K8s resources.

    kind/feature area/deployment priority/backlog triage/accepted 
    opened by aimichelle 0