
Apache Flink SQL API Temporary Table connector type - Stack Overflow


I am experimenting a little bit with the Flink SQL API. Basically, what I am trying to do works, but I am stuck at one point.

I am reading some Kafka topics, and from the data I receive I have to do some aggregations. For development purposes, I want to do the aggregation in memory and observe the results with the print connector.

So for the aggregation I created a temporary table, but I don't want to connect it to a Kafka topic or a JDBC connection yet, until I am sure which direction I will go.

So my temporary table looks like the following:

CREATE TEMPORARY TABLE orders (
    `field1` INT,
    `field2` STRING,
    `field3` STRING
)
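
For context, one workaround I tried (assuming the `datagen` connector that ships with the Flink distribution; the rate option is just an illustrative value) is to back the table with generated rows:

```sql
-- Same schema as above, but backed by the built-in datagen source,
-- which produces random rows in memory without any external system.
CREATE TEMPORARY TABLE orders (
    `field1` INT,
    `field2` STRING,
    `field3` STRING
) WITH (
    'connector' = 'datagen',
    'rows-per-second' = '5'
);
```

That works for a source, but it doesn't help for the intermediate table I want to aggregate into.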

Without using a connector for this table I can't make it work; I always get:

Table options do not contain an option key 'connector' for discovering a connector. Therefore, Flink assumes a managed table. However, a managed table factory that implements org.apache.flink.table.factories.ManagedTableFactory is not in the classpath

I was hoping to find a connector to test my concept in memory, but I can't find any; the best I could find is 'blackhole':

WITH (
  'connector' = 'blackhole'
)

but the problem with that is that then I can't use the print connector as a sink.
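
To make the goal concrete, this is roughly the shape of the pipeline I am after (the sink table name, column names, and aggregation here are placeholders, assuming the built-in `print` connector):

```sql
-- Hypothetical sink table backed by the print connector,
-- so aggregation results show up in the TaskManager logs.
CREATE TEMPORARY TABLE orders_print (
    `field2` STRING,
    `cnt` BIGINT
) WITH (
    'connector' = 'print'
);

-- Example aggregation feeding the print sink.
INSERT INTO orders_print
SELECT `field2`, COUNT(*) AS cnt
FROM orders
GROUP BY `field2`;
```

What I am missing is the intermediate, purely in-memory table in between, without committing to a real storage connector yet.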

What confuses me is that I see some samples from Confluent,

Confluent Flink Table

which create TABLEs without connectors:

CREATE TABLE t2 (
  `id` BIGINT,
  `name` STRING,
  `age` INT,
  `salary` DECIMAL(10,2),
  `active` BOOLEAN,
  `created_at` TIMESTAMP_LTZ(3)
) WITH (
   'changelog.mode' = 'retract'
);

Does their implementation have implicit connectors?
