Troubleshooting

This troubleshooting guide includes step-by-step instructions that should resolve most issues with logs.

v0.53.0: Docker: Schema Migrator: Dirty database version

If you are migrating from an older version of SigNoz to v0.53.0, you might encounter an issue in Docker deployments where signoz-schema-migrator fails to run with the following error.

clickhouse migrate failed to run, error: Dirty database version 14. Fix and force version.

Here are the steps to resolve it.

  • Check that this change is present, i.e. the macros section is uncommented in the clickhouse-config.xml file.
 <macros>
    <shard>01</shard>
    <replica>example01-01-1</replica>
</macros>
  • Make sure ClickHouse has restarted after the above change; you can restart it by running docker container restart signoz-clickhouse. If you try to run ./install.sh or docker compose up -d now, the migrator will fail again, but that's fine; we will fix it in the next step.
  • Exec into the ClickHouse container: docker exec -it signoz-clickhouse /bin/bash
  • Run the client: clickhouse client
  • Delete the migration entry by running this SQL command: delete from signoz_logs.schema_migrations where version=14;
  • Now you can run ./install.sh or docker compose up -d, and the migrator will run successfully.
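To confirm the fix took effect, you can inspect the migrations table again from clickhouse client. This is a quick sanity check, assuming the standard golang-migrate schema_migrations layout (version, dirty) that the error message implies:

    select version, dirty from signoz_logs.schema_migrations;

The dirty column should be 0 once the migration has completed cleanly.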

If you face any issues, please reach out to us on Slack.

Schema Migrator: Dirty database version

If you are migrating from an older version of SigNoz to v0.49.1 or v0.50.0, you might encounter an issue where signoz-schema-migrator fails to run with an error message like this.

Dirty database version 10. Fix and force version.

Similar error messages include:

Dirty database version 11. Fix and force version.
Dirty database version 12. Fix and force version.

To fix this issue, first disable the schema migrator.

  • On k8s you can delete the job.
  • On Docker you can delete the schema migrator container.
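For example (resource and namespace names vary by installation, so the placeholders below are assumptions; look the real names up first):

    # Kubernetes: find and delete the schema migrator job (namespace 'platform' is an assumption)
    kubectl -n platform get jobs
    kubectl -n platform delete job <schema-migrator-job-name>

    # Docker: find and remove the schema migrator container
    docker ps -a | grep schema-migrator
    docker rm -f <schema-migrator-container-name>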

Run The Migrations Manually
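The exact DDL depends on which migration was left dirty, so treat the following as a sketch rather than the definitive migration SQL: it clears the dirty migration entry (use the version number from your error message) and adds the scope columns, scope index, and tagType enum value that the verification steps below check for. The cluster name cluster matches the default SigNoz setup and is otherwise an assumption. Run these from clickhouse client inside the ClickHouse container:

    -- clear the dirty migration entry; use the version from your error (10, 11, or 12)
    delete from signoz_logs.schema_migrations where version=10;

    -- add the scope columns to the local logs table
    alter table signoz_logs.logs on cluster cluster
        add column if not exists scope_name String CODEC(ZSTD(1)),
        add column if not exists scope_version String CODEC(ZSTD(1)),
        add column if not exists scope_string_key Array(String) CODEC(ZSTD(1)),
        add column if not exists scope_string_value Array(String) CODEC(ZSTD(1));

    -- add the scope_name index on the local logs table
    alter table signoz_logs.logs on cluster cluster
        add index if not exists scope_name_idx scope_name TYPE tokenbf_v1(10240, 3, 0) GRANULARITY 4;

    -- mirror the scope columns on the distributed table
    alter table signoz_logs.distributed_logs on cluster cluster
        add column if not exists scope_name String CODEC(ZSTD(1)),
        add column if not exists scope_version String CODEC(ZSTD(1)),
        add column if not exists scope_string_key Array(String) CODEC(ZSTD(1)),
        add column if not exists scope_string_value Array(String) CODEC(ZSTD(1));

    -- extend the tagType enum on tag_attributes with the 'scope' value
    alter table signoz_logs.tag_attributes on cluster cluster
        modify column tagType Enum8('tag' = 1, 'resource' = 2, 'scope' = 3) CODEC(ZSTD(1));

Then verify the changes as described in the next section.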

Verify The Changes

  • Check the schema of the logs table by running
    show create table signoz_logs.logs format Vertical;
    
    The following should be present in the above response
        `scope_name` String CODEC(ZSTD(1)),
        `scope_version` String CODEC(ZSTD(1)),
        `scope_string_key` Array(String) CODEC(ZSTD(1)),
        `scope_string_value` Array(String) CODEC(ZSTD(1)),
        ...
        INDEX body_idx lower(body) TYPE ngrambf_v1(4, 60000, 5, 0) GRANULARITY 1,
        ...
        INDEX scope_name_idx scope_name TYPE tokenbf_v1(10240, 3, 0) GRANULARITY 4,
        ...
    
  • Check the schema of the distributed_logs table by running
    show create table signoz_logs.distributed_logs format Vertical;
    
    The following should be present in the above response
        `scope_name` String CODEC(ZSTD(1)),
        `scope_version` String CODEC(ZSTD(1)),
        `scope_string_key` Array(String) CODEC(ZSTD(1)),
        `scope_string_value` Array(String) CODEC(ZSTD(1)),
    
  • Check the schema of the tag_attributes table by running
    show create table signoz_logs.tag_attributes format Vertical;
    
    The following should be present in the above response
    `tagType` Enum8('tag' = 1, 'resource' = 2, 'scope' = 3) CODEC(ZSTD(1)),
    

Once verified, run the schema migrator again.

  • On k8s, run helm upgrade.
  • On Docker, rerun docker compose up -d.
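For example (the release name, namespace, and values file below are assumptions; use the values from your installation):

    # Kubernetes
    helm -n platform upgrade my-release signoz/signoz -f override-values.yaml

    # Docker
    docker compose up -d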

If you face any issues, please reach out to us on Slack.

Filter logs of the same application/service from different hosts

If you have an application/service that emits similar logs, but multiple instances of it run on different hosts, you can distinguish them by adding a resource attribute.

There are different ways to add a resource attribute.

  1. By passing it as an environment variable, if you are using the OpenTelemetry SDK or auto-instrumentation (see the example after this list)
    OTEL_RESOURCE_ATTRIBUTES=service.name=<service_name>,hostname=<host_name>
    
    Replace <service_name> with the actual service name and <host_name> with the actual hostname.
  2. If you have an OTel Collector running on each host, you can add an attributes processor to set the hostname.
    processors:
        attributes/add_hostname:
            actions:
                - key: hostname
                  value: <hostname>
                  action: upsert
    ...
    service:
        pipelines:
            logs:
                processors: [attributes/add_hostname, batch]
    
    Replace <hostname> with the actual hostname.
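As a concrete illustration of the first approach, two instances of the same service on different hosts would each set a different hostname (the service and host names below are illustrative), and you can then filter logs by the hostname attribute in SigNoz:

    # host A
    export OTEL_RESOURCE_ATTRIBUTES=service.name=checkout,hostname=host-a

    # host B
    export OTEL_RESOURCE_ATTRIBUTES=service.name=checkout,hostname=host-b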

Missing Columns Issue

If you are using SigNoz version 0.27 or newer and you ran the 0.27 migration in the past, you may face a missing-columns issue if you are using distributed ClickHouse with multiple shards.

To solve this issue, exec into each shard and follow the steps below. First, check the schema of the distributed_logs table:

show create table signoz_logs.distributed_logs

If in one shard the column is named host_name while in another shard it is named attribute_string_host_name, run the following command:

alter table signoz_logs.distributed_logs on cluster cluster rename column if exists host_name to attribute_string_host_name

Run the above command for every column that was not migrated.
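To enumerate the columns on each shard instead of reading through the full CREATE statement, you can also query system.columns (a convenience query, not part of the original migration):

    select name from system.columns where database = 'signoz_logs' and table = 'distributed_logs' order by name;

Comparing this list across shards makes the unrenamed columns easy to spot.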

K8s Attribute Filtering Issue in Logs

In SigNoz chart releases v0.9.1, v0.10.0, and v0.10.1, some users face issues querying the following selected fields.

  • k8s_container_name
  • k8s_namespace_name
  • k8s_pod_name
  • k8s_container_restart_count
  • k8s_pod_uid

If you have added any of the above as selected fields and you get empty data when filtering on them, you will have to perform the following steps.

  • Exec into your ClickHouse container

    kubectl exec -n platform -it chi-my-release-clickhouse-cluster-0-0-0 -- sh
    
    clickhouse client
    
  • Run the following queries

    use signoz_logs;
    
    show create table logs;
    
  • For the corresponding field, you will find a materialized column and an index. For example, for k8s_namespace_name you will have

        `k8s_namespace_name` String MATERIALIZED attributes_string_value[indexOf(attributes_string_key, 'k8s_namespace_name')] CODEC(LZ4)

    and the index

        INDEX k8s_namespace_name_idx k8s_namespace_name TYPE bloom_filter(0.01) GRANULARITY 64

  • You will have to delete the index and remove the materialized column

    alter table logs drop column k8s_namespace_name;
    alter table logs drop index k8s_namespace_name_idx;
    
  • Perform the above steps for all the k8s fields listed (a consolidated sketch follows this list).

  • Once done, truncate the attribute keys table

    truncate table logs_atrribute_keys;
    
  • Now you can go back to the UI and convert them back to selected fields; filtering will then work as expected.
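As a consolidated sketch, the drops for all five fields look like this (run inside clickhouse client after use signoz_logs;). The index names are assumed to follow the <field>_idx pattern shown above, so verify them against your show create table logs output first:

    alter table logs drop column if exists k8s_container_name;
    alter table logs drop index if exists k8s_container_name_idx;
    alter table logs drop column if exists k8s_namespace_name;
    alter table logs drop index if exists k8s_namespace_name_idx;
    alter table logs drop column if exists k8s_pod_name;
    alter table logs drop index if exists k8s_pod_name_idx;
    alter table logs drop column if exists k8s_container_restart_count;
    alter table logs drop index if exists k8s_container_restart_count_idx;
    alter table logs drop column if exists k8s_pod_uid;
    alter table logs drop index if exists k8s_pod_uid_idx;
    truncate table logs_atrribute_keys;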
