
Avro schema:

{
  "name": "Entity",
  "type": "record",
  "namespace": "com.foobar.entity",
  "fields": [
    {
      "name": "attribute",
      "type": "string"
    },
    {
      "name": "value",
      "type": "int"
    },
    {
      "name": "timestamp",
      "type": { "type": "long", "logicalType": "timestamp-micros" }
    }
  ]
}

The source timestamp is in Unix format with millisecond precision.

When I put such records into BigQuery I get values like 1970-01-19 01:18:19.415 UTC in the BigQuery data preview. However, the value I stored is 1559899418, which is Friday, 7 June 2019 09:23:38. Any ideas why?
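
A quick way to see what's going on (a minimal Python sketch, not part of the original pipeline) is to decode the raw value under each candidate precision and check which interpretation matches the preview:

from datetime import datetime, timezone

raw = 1559899418

# Interpreted as seconds: the intended date.
print(datetime.fromtimestamp(raw, tz=timezone.utc))
# 2019-06-07 09:23:38+00:00

# Interpreted as milliseconds: the 1970-01-19 date the preview shows.
print(datetime.fromtimestamp(raw / 1_000, tz=timezone.utc))
# 1970-01-19 01:18:19.418000+00:00

# Interpreted as microseconds, which is what timestamp-micros declares.
print(datetime.fromtimestamp(raw / 1_000_000, tz=timezone.utc))
# 1970-01-01 00:25:59.899418+00:00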

Reference: https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-avro#logical_types


Your timestamp is off by a factor of 1000. Indeed, 1559899418 corresponds to Friday, 7 June 2019 09:23:38, but only when read with second precision (a plain Unix timestamp), not millisecond precision. And 1559899 (one thousandth of 1559899418) does indeed correspond to 1970-01-19 01:18:19, which is the date in your preview. Note also that your schema declares timestamp-micros, so BigQuery ultimately expects microseconds; a value in seconds therefore has to be multiplied by 1,000,000.
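
Here is a minimal sketch of the fix, assuming a Python producer using fastavro (the library choice and field values are placeholders, not something from the question): pass the timestamp as a timezone-aware datetime so the timestamp-micros logical type encodes it at the right precision.

from datetime import datetime, timezone
from fastavro import parse_schema, writer

schema = {
    "name": "Entity",
    "type": "record",
    "namespace": "com.foobar.entity",
    "fields": [
        {"name": "attribute", "type": "string"},
        {"name": "value", "type": "int"},
        {
            "name": "timestamp",
            "type": {"type": "long", "logicalType": "timestamp-micros"},
        },
    ],
}

record = {
    "attribute": "example",  # placeholder values
    "value": 1,
    # fastavro encodes an aware datetime at the declared precision,
    # i.e. as microseconds since the epoch for timestamp-micros.
    "timestamp": datetime.fromtimestamp(1559899418, tz=timezone.utc),
}

with open("entity.avro", "wb") as out:
    writer(out, parse_schema(schema), [record])

This is equivalent to the fix in the comment below: multiplying the raw seconds value by 1000 * 1000 before writing it into the long field.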

  • 1
🙉 Thanks for pointing this out. So in the end I did timestamp * 1000 * 1000 and now BigQuery shows the correct date. – Artjom Zabelin Jun 7 at 11:50
