DBT project 1

1. Initialize the dbt project and load the required tables into staging

1. Install the dbt packages the project needs

  1. In the dbt project root, create a `packages.yml` listing the required packages:
packages:
  - package: dbt-labs/dbt_utils
    version: 1.1.1
  - package: dbt-labs/codegen
    version: 0.12.1
  - package: calogica/dbt_expectations
    version: 0.10.3
  • Install the packages:
dbt deps
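dbt deps downloads these packages into dbt_packages/ and exposes their macros under each package's namespace. As a quick taste of what they provide, dbt_utils ships a generate_surrogate_key macro that hashes several columns into one key; the model below is purely illustrative and not part of this project:

-- hypothetical model, for illustration only: build a composite key with dbt_utils
select
    {{ dbt_utils.generate_surrogate_key(['l_orderkey', 'l_linenumber']) }} as order_item_key,
    l_orderkey,
    l_linenumber
from SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.LINEITEM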

2. Use codegen to add sources for the staging-layer tables

In the staging layer, create tpch_sources.yml to declare the tables the staging layer needs.
  1. Generate it with codegen: create a generate_sources folder under macros, dedicated to the codegen macros that produce the YAML files the project needs.


  2. Write the macro based on the table parameters in Snowflake:

  • `generate_tpch_sources.sql` (name is the label used when referencing the source; schema_name is the actual schema in the database; database_name is the database; table_names lists the tables; include_descriptions and generate_columns add descriptions and column names to the output):
{{ codegen.generate_source(
    name = 'tpch',
    schema_name = 'TPCH_SF1',
    database_name = 'SNOWFLAKE_SAMPLE_DATA',
    table_names = ['ORDERS','LINEITEM'],
    include_descriptions = True,
    generate_columns = True
) }}
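The macro's output is the YAML you paste into tpch_sources.yml. Once the source is declared, models refer to these tables through the source() macro instead of hard-coding database.schema.table; name is the first argument to source(), while schema_name and database_name decide what it compiles to. A minimal sanity-check model (hypothetical file, for illustration only):

-- models/staging/tpch/source_probe.sql (hypothetical; delete after verifying access)
-- compiles to a count against SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.ORDERS
select count(*) as order_count
from {{ source('tpch', 'ORDERS') }}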

3. Use codegen to generate the base models

  • Once the sources are generated, we load the base tables into Snowflake as views; these are the base models.


  • Create the macro that generates the base model, `generate_tpch_orders.sql` (a sketch of the file it produces follows the macro):
{{ codegen.generate_base_model(
    source_name='tpch',
    table_name='lineitem',
    materialized='view'
) }}
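codegen.generate_base_model prints a plain select of every source column; we save that output as the staging model and, judging by the column names used downstream, rename the raw TPC-H columns to friendlier ones. An abbreviated sketch of what stg_tpch_line_items.sql could look like after that renaming (the mapping is assumed from the TPC-H schema, not copied from the project):

-- models/staging/tpch/stg_tpch_line_items.sql (abbreviated sketch, assumed renames)
with source as (
    select * from {{ source('tpch', 'LINEITEM') }}
),
renamed as (
    select
        l_orderkey      as order_key,
        l_partkey       as part_key,
        l_suppkey       as supplier_key,
        l_linenumber    as line_number,
        l_quantity      as quantity,
        l_extendedprice as extended_price,
        l_discount      as discount_percentage,
        l_tax           as tax_rate,
        l_returnflag    as return_flag,
        l_linestatus    as status_code,
        l_shipdate      as ship_date,
        l_commitdate    as commit_date,
        l_receiptdate   as receipt_date,
        l_shipmode      as ship_mode
    from source
)
select * from renamed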

4. Run the DAGs with Airflow

1. Create a new folder under the Airflow dags directory, /dags/tpch_schedule.


  2. Create run_tpch_models.py:
from airflow import DAG
from datetime import datetime
from dbtOp.dbt_operator import DbtOperator


with DAG(
    dag_id='tpch_data_transformation',
    start_date=datetime(2024, 4, 12, 1),
    schedule_interval=None,
    max_active_runs=1,  # DAG-level settings, so they belong on the DAG, not in default_args
    catchup=False,
) as dag:
    tpch_staging = DbtOperator(
        task_id='run_tpch_staging_model',
        dbt_command='run --models staging.tpch.*'
    )

tpch_staging  # single task, no dependencies to chain yet

Note: run --models staging.tpch.* means: run every model under the staging/tpch folder.

  3. At this point the new DAG does not show up in the Airflow web UI; we need to edit the Airflow config so our folder is picked up.


# airflow.cfg: point dags_folder at the new directory
# dags_folder = /opt/airflow/dags
dags_folder = /opt/airflow/dags/tpch_schedule

  4. Trigger the DAG in Airflow.


  • The staging views we just built now appear under Views in Snowflake.

    Note: because the default database in root/.dbt/profiles.yml is airbnb and the schema is RAW, these views are created in the airbnb database.

2. Configure materialization in dbt_project.yml

  1. Specify the models' materialization and the Snowflake warehouse (a per-model override example follows the snippet):
models:
  my_dbt:
    # these settings apply to all models under the matching folders
    staging:
      materialized: view
      snowflake_warehouse: COMPUTE_WH
    marts:
      materialized: table
      snowflake_warehouse: COMPUTE_WH
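
These folder-level settings are only defaults; any single model can override them with the config() macro at the top of its file, and that takes precedence over dbt_project.yml. A small illustration:

-- at the top of a model file; overrides the project-level default for this model only
{{ config(
    materialized = 'table',
    snowflake_warehouse = 'COMPUTE_WH'
) }}

select 1 as placeholder_column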

3. Create the Dim and Fact tables

1. Under marts, create int_order_items.sql and fct_orders.sql.


  • int_order_items.sql
with orders as (
    
    select * from {{ ref('stg_tpch_orders') }}

),

line_item as (

    select * from {{ ref('stg_tpch_line_items') }}

)
select 

    line_item.order_item_key,
    orders.order_key,
    orders.customer_key,
    orders.order_date,
    orders.status_code as order_status_code,
    line_item.part_key,
    line_item.supplier_key,
    line_item.return_flag,
    line_item.line_number,
    line_item.status_code as order_item_status_code,
    line_item.ship_date,
    line_item.commit_date,
    line_item.receipt_date,
    line_item.ship_mode,
    line_item.extended_price,
    line_item.quantity,
    
    -- extended_price is actually the line item total,
    -- so we back out the extended price per item
    (line_item.extended_price/nullif(line_item.quantity, 0))::decimal(16,2) as base_price,
    line_item.discount_percentage,
    (base_price * (1 - line_item.discount_percentage))::decimal(16,2) as discounted_price,

    line_item.extended_price as gross_item_sales_amount,
    (line_item.extended_price * (1 - line_item.discount_percentage))::decimal(16,2) as discounted_item_sales_amount,
    -- We model discounts as negative amounts
    (-1 * line_item.extended_price * line_item.discount_percentage)::decimal(16,2) as item_discount_amount,
    line_item.tax_rate,
    ((gross_item_sales_amount + item_discount_amount) * line_item.tax_rate)::decimal(16,2) as item_tax_amount,
    (
        gross_item_sales_amount + 
        item_discount_amount + 
        item_tax_amount
    )::decimal(16,2) as net_item_sales_amount

from
    orders
inner join line_item
        on orders.order_key = line_item.order_key
order by
    orders.order_date
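
To make the amounts above concrete, here is a standalone query with made-up numbers (quantity 10, extended_price 1000, discount 5%, tax 8%). extended_price is already the line total, so the per-unit price is backed out first:

select
    (1000 / nullif(10, 0))::decimal(16,2)   as base_price,               -- 100.00
    (100 * (1 - 0.05))::decimal(16,2)       as discounted_price,         -- 95.00
    1000::decimal(16,2)                     as gross_item_sales_amount,  -- 1000.00
    (-1 * 1000 * 0.05)::decimal(16,2)       as item_discount_amount,     -- -50.00
    ((1000 + (-50)) * 0.08)::decimal(16,2)  as item_tax_amount,          -- 76.00
    (1000 + (-50) + 76)::decimal(16,2)      as net_item_sales_amount     -- 1026.00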
  • fct_orders.sql
with orders as (
    
    select * from {{ ref('stg_tpch_orders') }} 

),
order_item as (
    
    select * from {{ ref('int_order_items') }}

),
order_item_summary as (

    select 
        order_key,
        sum(gross_item_sales_amount) as gross_item_sales_amount,
        sum(item_discount_amount) as item_discount_amount,
        sum(item_tax_amount) as item_tax_amount,
        sum(net_item_sales_amount) as net_item_sales_amount
    from order_item
    group by
        1
),
final as (

    select 

        orders.order_key, 
        orders.order_date,
        orders.customer_key,
        orders.status_code,
        orders.priority_code,
        orders.clerk_name,
        orders.ship_priority,
                
        1 as order_count,                
        order_item_summary.gross_item_sales_amount,
        order_item_summary.item_discount_amount,
        order_item_summary.item_tax_amount,
        order_item_summary.net_item_sales_amount
    from
        orders
        inner join order_item_summary
            on orders.order_key = order_item_summary.order_key
)
select 
    *
from
    final

order by
    order_date
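
With fct_orders in place, downstream consumers can aggregate it directly. For example, an ad-hoc analysis (not part of the project) of monthly gross versus net sales:

-- e.g. an analyses/ file or a BI query against the finished fact table
select
    date_trunc('month', order_date)  as order_month,
    sum(order_count)                 as orders,
    sum(gross_item_sales_amount)     as gross_sales,
    sum(net_item_sales_amount)       as net_sales
from {{ ref('fct_orders') }}
group by 1
order by 1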

4. Create a DAG to run the models above automatically

from airflow import DAG
from datetime import datetime
from dbtOp.dbt_operator import DbtOperator


with DAG(
    dag_id='tpch_data_transformation',
    start_date=datetime(2024, 4, 12, 1),
    schedule_interval=None,
    max_active_runs=1,  # DAG-level settings, so they belong on the DAG, not in default_args
    catchup=False,
) as dag:
    tpch_staging = DbtOperator(
        task_id='run_tpch_staging_model',
        dbt_command='run --models staging.tpch.*'
    )

    tpch_int_orders = DbtOperator(
        task_id='run_int_order_items',
        dbt_command='run --models int_order_items'
    )

    tpch_fact_orders = DbtOperator(
        task_id='run_fact_orders',
        dbt_command='run --models fct_orders'
    )

# staging runs first, then the intermediate model, then the fact model
tpch_staging >> tpch_int_orders >> tpch_fact_orders
  • The DAG runs successfully.



5. Change the staging models to ephemeral

  • So far the staging models have been materialized as views, which consumes warehouse resources even though they only exist to load the raw tables, so we switch them to ephemeral.

    1. Edit dbt_project.yml and change the staging materialization:
models:
  my_dbt:
    # these settings apply to all models under the matching folders
    staging:
      materialized: ephemeral
      snowflake_warehouse: COMPUTE_WH
    marts:
      materialized: table
      snowflake_warehouse: COMPUTE_WH

2. Re-run the DAG; after it succeeds, the stg_ models no longer appear under Views in Snowflake.


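An ephemeral model is never built in the warehouse; at compile time dbt splices its SQL into every downstream model as a CTE, using its __dbt__cte__ naming convention. Very roughly, the compiled marts SQL now starts like this (heavily simplified, illustration only):

-- target/compiled/.../int_order_items.sql, simplified illustration
with __dbt__cte__stg_tpch_orders as (
    select
        o_orderkey  as order_key,
        o_orderdate as order_date
    from SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.ORDERS
),
orders as (
    select * from __dbt__cte__stg_tpch_orders
)
select order_key, order_date
from orders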

6. Edit the config file and switch to the production environment

Because the local root/.dbt/profiles.yml is mounted into the dbt environment inside Docker, editing that file changes the dbt configuration in the container in real time; switch the target from dev to the production output.

my_dbt:
  outputs:
    dev:
      account: zxitxjd-aj37059
      database: airbnb
      password: "<your_password>"
      role: transform
      schema: RAW
      threads: 5
      type: snowflake
      user: lg101
      warehouse: COMPUTE_WH
    project:
      account: zxitxjd-aj37059
      database: airbnb
      password: "<your_password>"
      role: transform
      schema: TPCH
      threads: 5
      type: snowflake
      user: lg101
      warehouse: COMPUTE_WH
  target: project

7. Add generic tests and a singular test to the code that now runs correctly

  1. Under the marts/core folder, add a core.yml file; it can be generated with codegen:
{{ codegen.generate_model_yaml(
    model_names=['fct_orders','int_order_items']
) }}
  2. Edit core.yml and add tests on the table's columns:
models:
  - name: fct_orders
    description: ""
    columns:
      - name: order_key
        data_type: number
        description: ""
        tests:
          - unique
          - not_null
          - relationships:
              to: ref('stg_tpch_orders')
              field: order_key
              severity: warn
      - name: order_date
        data_type: date
        description: ""
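
Each generic test compiles to a query that must return zero rows; the not_null test on order_key, for example, boils down to roughly this (schematic, the generated SQL has extra wrapping):

-- approximately what dbt runs for `not_null` on fct_orders.order_key
select order_key
from {{ ref('fct_orders') }}
where order_key is null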

3. Under the tests folder, add a singular test file, t_fct_orders_negative_discount.sql:

select * from 
{{ ref('fct_orders') }}
where item_discount_amount > 0
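
A singular test works the same way: dbt runs the file and the test fails if any rows come back, so this one fails whenever a discount amount is positive (discounts are modeled as negative amounts). Another singular test in the same spirit, shown only as an illustration and not part of the original project, could check that the amount columns reconcile:

-- tests/t_fct_orders_amounts_reconcile.sql (hypothetical additional test)
select *
from {{ ref('fct_orders') }}
where abs(
        (gross_item_sales_amount + item_discount_amount + item_tax_amount)
        - net_item_sales_amount
      ) > 0.02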

8. The complete dbt project workflow

(flow diagram: raw TPC-H source tables → staging models (ephemeral) → marts models int_order_items and fct_orders → tests, orchestrated by the Airflow DAG)