qairt-dlc-info

usage: qairt-dlc-info [-h] -i INPUT_DLC [-s SAVE] [-m] [-d] [-t]

Reads in a DLC file and outputs layer information to stdout.

required arguments:
  -i, --input_dlc
    Path to a DLC file.

optional arguments:
  -s, --save
    Save the output to a CSV file at the specified target file path.
  -m, --display_memory
    Show detailed information about memory usage.
  -d, --display_all_encodings
    Show detailed axis-quantization encoding information.
  -t, --dump_framework_trace
    Save framework trace info into the CSV file passed to the --save option.
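The flags above compose in a predictable way, so a thin wrapper can build the command line before handing it to subprocess. A minimal sketch (the helper name dlc_info_cmd and the keyword names are mine, not part of the tool):

```python
def dlc_info_cmd(input_dlc, save_csv=None, show_memory=False,
                 all_encodings=False, framework_trace=False):
    """Build an argument list for qairt-dlc-info (hypothetical helper)."""
    cmd = ["qairt-dlc-info", "-i", input_dlc]
    if save_csv:
        cmd += ["-s", save_csv]
    if show_memory:
        cmd.append("-m")
    if all_encodings:
        cmd.append("-d")
    if framework_trace and save_csv:
        # -t only has an effect when a CSV target was given via --save
        cmd.append("-t")
    return cmd

# usage: subprocess.run(dlc_info_cmd("model.dlc", "info.csv"), check=True)
```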

DLC info of: /content/model-opset-17.dlc
Model Version: 2025-10-31
Model Copyright: Maintained by k2-fsa
""
"Converter command: qairt-converter; validate_models=False; user_custom_io=[]; use_onnx_relay=False; unroll_gru_time_steps=True; signature_name=; quantization_overrides=; pytorch_custom_op_lib=; prepare_inputs_as_params=False; perform_axes_to_spatial_first_order=True; unroll_lstm_time_steps=True; packed_max_seq=1; packed_masked_softmax_inputs=[]; output_layout=[]; show_unconsumed_nodes=False; out_names=[]; no_static_tensor_validation=False; no_simplification=False; no_optimization=False; multi_time_steps_gru=False; match_caffe_ssd_to_tf=False; lora_weight_list=None; preserve_onnx_output_order=False; perform_layout_transformation=False; dump_relay=None; keep_quant_nodes=False; keep_disconnected_nodes=False; input_dtype=[['x', 'float32'], ['prompt', 'int32']]; disable_preserve_io=False; handle_gather_negative_indices=True; converter_op_package_lib=; gguf_config=None; tf_summary=False; copyright_file=./copyright.txt; force_prune_cast_ops=False; use_convert_quantization_nodes=False; preserve_io_datatype=[]; package_name=None; dry_run=None; defer_loading=False; desired_io_layout=[]; input_type=[]; float_bitwidth=32; io_config=; model_version=2025-10-31; float_bias_bitwidth=0; extract_color_transform=True; expand_sparse_op_structure=False; expand_lstm_op_structure=False; input_layout=[]; saved_model_signature_key=serving_default; expand_gru_op_structure=True; enable_match_gathernd=False; enable_framework_trace=False; input_encoding=[]; dump_io_config_template=; onnx_summary=False; disable_match_lstms=False; soc_model=; keep_int64_inputs=False; disable_batchnorm_folding=False; preserve_io=[['layout']]; multi_time_steps_lstm=False; debug=-1; batch=None; custom_op_config_paths=None; preprocess_roi_pool_inputs=True; backend=; saved_model_tag=serve; dump_custom_io_config_template=; dumpIR=False; align_matmul_ranks=True; dump_value_info=False; squash_box_decoder=True; define_symbol=None; inject_cast_for_gather=True; dump_inferred_model=False; enable_tensor_deduplication=False; 
perform_sequence_construct_optimizer=False; adjust_nms_features_dims=True; partial_graph_input_name=None; disable_qnn_op_config_validation=False; apply_masked_softmax=uncompressed; input_dim=[['x', '1,93,560'], ['prompt', '1,4']]; dump_exported_onnx=False"
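The quoted field above packs every converter option into a single semicolon-separated key=value string after the tool name. A small sketch for splitting it into a dict (assumes no option value itself contains a semicolon, which holds for this dump):

```python
def parse_converter_command(field: str) -> dict:
    """Parse a 'Converter command: tool; k=v; k=v; ...' field into a dict."""
    # Drop the 'Converter command' label before the first colon.
    _, _, rest = field.partition(":")
    parts = [p.strip() for p in rest.split(";") if p.strip()]
    tool, *opts = parts
    options = {}
    for part in opts:
        key, _, value = part.partition("=")  # split on the first '=' only
        options[key.strip()] = value.strip()  # empty string for bare 'key='
    return options
```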
Quantizer command: N/A
DLC created with converter version: 2.32.6.250402152434_116405
Info of graph: model-opset-17
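In the layer table below, rows whose Id column is empty are continuation rows that extend the preceding op's Inputs, Outputs, and Parameters. A minimal sketch for regrouping them with the csv module (my own reading of the layout, not an official parser):

```python
import csv
import io

def parse_layer_table(text: str) -> list:
    """Group continuation rows (empty Id) under the preceding op row."""
    rows = csv.reader(io.StringIO(text))
    header = next(rows)
    ops = []
    for row in rows:
        if row[0]:  # non-empty Id starts a new op
            op = dict(zip(header, row))
            op["Parameters"] = [row[-1]] if row[-1] else []
            ops.append(op)
        elif ops:   # continuation row: merge extra inputs/outputs/params
            if row[3]:
                ops[-1]["Inputs"] += "\n" + row[3]
            if row[4]:
                ops[-1]["Outputs"] += "\n" + row[4]
            if row[-1]:
                ops[-1]["Parameters"].append(row[-1])
    return ops
```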
Id,Name,Type,Inputs,Outputs,Out Dims,Runtimes,Parameters
0,x.nfc,Transpose,"x (data type: Float_32; tensor dimension: [1,93,560]; tensor type: APP_WRITE) [NW Input]","x.nfc (data type: Float_32; tensor dimension: [1,560,93]; tensor type: NATIVE)",1x560x93,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1,/embed/Gather,Gather,"embed.weight (data type: Float_32; tensor dimension: [16,560]; tensor type: STATIC)","/embed/Gather_output_0 (data type: Float_32; tensor dimension: [1,4,560]; tensor type: NATIVE)",1x4x560,A D G C,axis: 0
,,,"prompt (data type: Int_32; tensor dimension: [1,4]; tensor type: APP_WRITE) [NW Input]",,,,packageName: qti.aisw
2,/embed/Gather_output_0.nfc,Transpose,"/embed/Gather_output_0 (data type: Float_32; tensor dimension: [1,4,560]; tensor type: NATIVE)","/embed/Gather_output_0.nfc (data type: Float_32; tensor dimension: [1,560,4]; tensor type: NATIVE)",1x560x4,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
3,/Add,Eltwise_Binary,"x.nfc (data type: Float_32; tensor dimension: [1,560,93]; tensor type: NATIVE)","/Add_output_0 (data type: Float_32; tensor dimension: [1,560,93]; tensor type: NATIVE)",1x560x93,A D G C,operation: 0
,,,"/Constant_output_0 (data type: Float_32; tensor dimension: [1,560,1]; tensor type: STATIC)",,,,packageName: qti.aisw
4,/Mul,Eltwise_Binary,"/Add_output_0 (data type: Float_32; tensor dimension: [1,560,93]; tensor type: NATIVE)","/Mul_output_0 (data type: Float_32; tensor dimension: [1,560,93]; tensor type: NATIVE)",1x560x93,A D G C,operation: 13
,,,"/Constant_1_output_0 (data type: Float_32; tensor dimension: [1,560,1]; tensor type: STATIC)",,,,packageName: qti.aisw
5,/Concat,Concat,"/embed/Gather_output_0.nfc (data type: Float_32; tensor dimension: [1,560,4]; tensor type: NATIVE)","/Concat_output_0 (data type: Float_32; tensor dimension: [1,560,97]; tensor type: NATIVE)",1x560x97,A D G C,axis: 2
,,,"/Mul_output_0 (data type: Float_32; tensor dimension: [1,560,93]; tensor type: NATIVE)",,,,packageName: qti.aisw
6,/encoder/Mul,Eltwise_Binary,"/Concat_output_0 (data type: Float_32; tensor dimension: [1,560,97]; tensor type: NATIVE)","/encoder/Mul_output_0 (data type: Float_32; tensor dimension: [1,560,97]; tensor type: NATIVE)",1x560x97,A D G C,operation: 13
,,,/encoder/Constant_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
7,/encoder/embed/Add_1,Eltwise_Binary,"/encoder/Mul_output_0 (data type: Float_32; tensor dimension: [1,560,97]; tensor type: NATIVE)","/encoder/embed/Add_1_output_0 (data type: Float_32; tensor dimension: [1,560,97]; tensor type: NATIVE)",1x560x97,A D G C,operation: 0
,,,"/encoder/embed/Concat_1_output_0 (data type: Float_32; tensor dimension: [1,560,97]; tensor type: STATIC)",,,,packageName: qti.aisw
8,/encoder/embed/Add_1_output_0.ncf,Transpose,"/encoder/embed/Add_1_output_0 (data type: Float_32; tensor dimension: [1,560,97]; tensor type: NATIVE)","/encoder/embed/Add_1_output_0.ncf (data type: Float_32; tensor dimension: [1,97,560]; tensor type: NATIVE)",1x97x560,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
9,/encoder/encoders0.0/norm1/LayerNormalization,LayerNorm,"/encoder/embed/Add_1_output_0.ncf (data type: Float_32; tensor dimension: [1,97,560]; tensor type: NATIVE)","/encoder/encoders0.0/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,560]; tensor type: NATIVE)",1x97x560,A D G C,axes: [2]
,,,onnx::LayerNormalization_9033 (data type: Float_32; tensor dimension: [560]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9034 (data type: Float_32; tensor dimension: [560]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 271k (0.102%)
10,/encoder/encoders0.0/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders0.0/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,560]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,560]; tensor type: NATIVE)",97x560,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 560]"
11,/encoder/encoders0.0/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders0.0/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,560]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders0.0.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9035 (data type: Float_32; tensor dimension: [1536,560]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders0.0.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 861k (0.372%)
,,,,,,,MACs per inference: 860k (0.323%)
12,/encoder/encoders0.0/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders0.0/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
13,/encoder/encoders0.0/self_attn/Split,Split,"/encoder/encoders0.0/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders0.0/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders0.0/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
14,/encoder/encoders0.0/self_attn/Reshape,Reshape,"/encoder/encoders0.0/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
15,/encoder/encoders0.0/self_attn/Transpose,Transpose,"/encoder/encoders0.0/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
16,/encoder/encoders0.0/self_attn/Reshape_1,Reshape,"/encoder/encoders0.0/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
17,/encoder/encoders0.0/self_attn/Reshape_2,Reshape,"/encoder/encoders0.0/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
18,/encoder/encoders0.0/self_attn/Transpose_1,Transpose,"/encoder/encoders0.0/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
19,/encoder/encoders0.0/self_attn/Transpose_2,Transpose,"/encoder/encoders0.0/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
20,/encoder/encoders0.0/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders0.0/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
21,/encoder/encoders0.0/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders0.0/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
22,/encoder/encoders0.0/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders0.0/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders0.0.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
23,/encoder/encoders0.0/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders0.0/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
24,/encoder/encoders0.0/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders0.0/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
25,/encoder/encoders0.0/self_attn/Transpose_3,Transpose,"/encoder/encoders0.0/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
26,/encoder/encoders0.0/self_attn/Add,Eltwise_Binary,"/encoder/encoders0.0/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders0.0/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
27,/encoder/encoders0.0/self_attn/Mul,Eltwise_Binary,"/encoder/encoders0.0/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
28,/encoder/encoders0.0/self_attn/Transpose_4,Transpose,"/encoder/encoders0.0/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
29,/encoder/encoders0.0/self_attn/MatMul,MatMul,"/encoder/encoders0.0/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders0.0/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
30,/encoder/encoders0.0/self_attn/Softmax,Softmax,"/encoder/encoders0.0/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
31,/encoder/encoders0.0/self_attn/MatMul_1,MatMul,"/encoder/encoders0.0/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders0.0/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
32,/encoder/encoders0.0/self_attn/Transpose_5,Transpose,"/encoder/encoders0.0/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
33,/encoder/encoders0.0/self_attn/Reshape_3,Reshape,"/encoder/encoders0.0/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
34,/encoder/encoders0.0/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders0.0/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders0.0.self_attn.linear_out.bias
,,,"onnx::MatMul_9049 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders0.0.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
35,/encoder/encoders0.0/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders0.0/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
36,/encoder/encoders0.0/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders0.0/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders0.0/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders0.0/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
37,/encoder/encoders.0/norm1/LayerNormalization,LayerNorm,"/encoder/encoders0.0/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.0/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9050 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9051 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
38,/encoder/encoders.0/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.0/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
39,/encoder/encoders.0/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.0/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.0.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9052 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.0.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
40,/encoder/encoders.0/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.0/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
41,/encoder/encoders.0/self_attn/Split,Split,"/encoder/encoders.0/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.0/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.0/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
42,/encoder/encoders.0/self_attn/Reshape,Reshape,"/encoder/encoders.0/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
43,/encoder/encoders.0/self_attn/Transpose,Transpose,"/encoder/encoders.0/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
44,/encoder/encoders.0/self_attn/Reshape_1,Reshape,"/encoder/encoders.0/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
45,/encoder/encoders.0/self_attn/Reshape_2,Reshape,"/encoder/encoders.0/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
46,/encoder/encoders.0/self_attn/Transpose_1,Transpose,"/encoder/encoders.0/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
47,/encoder/encoders.0/self_attn/Transpose_2,Transpose,"/encoder/encoders.0/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
48,/encoder/encoders.0/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.0/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
49,/encoder/encoders.0/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.0/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
50,/encoder/encoders.0/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.0/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.0.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
51,/encoder/encoders.0/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.0/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
52,/encoder/encoders.0/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.0/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
53,/encoder/encoders.0/self_attn/Transpose_3,Transpose,"/encoder/encoders.0/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
54,/encoder/encoders.0/self_attn/Add,Eltwise_Binary,"/encoder/encoders.0/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.0/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
55,/encoder/encoders.0/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.0/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
56,/encoder/encoders.0/self_attn/Transpose_4,Transpose,"/encoder/encoders.0/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
57,/encoder/encoders.0/self_attn/MatMul,MatMul,"/encoder/encoders.0/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.0/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
58,/encoder/encoders.0/self_attn/Softmax,Softmax,"/encoder/encoders.0/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
59,/encoder/encoders.0/self_attn/MatMul_1,MatMul,"/encoder/encoders.0/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.0/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
60,/encoder/encoders.0/self_attn/Transpose_5,Transpose,"/encoder/encoders.0/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
61,/encoder/encoders.0/self_attn/Reshape_3,Reshape,"/encoder/encoders.0/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
62,/encoder/encoders.0/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.0/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.0.self_attn.linear_out.bias
,,,"onnx::MatMul_9066 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.0.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
63,/encoder/encoders.0/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.0/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
64,/encoder/encoders.0/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.0/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.0/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.0/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
65,/encoder/encoders.0/Add,Eltwise_Binary,"/encoder/encoders0.0/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.0/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.0/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
66,/encoder/encoders.0/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.0/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.0/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9067 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9068 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
67,/encoder/encoders.0/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.0/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.0/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
68,/encoder/encoders.0/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.0/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.0/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.0.feed_forward.w_1.bias
,,,"onnx::MatMul_9069 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.0.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
69,/encoder/encoders.0/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.0/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.0/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
70,/encoder/encoders.0/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.0/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.0/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
71,/encoder/encoders.0/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.0/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.0/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
72,/encoder/encoders.0/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.0/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.0/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.0.feed_forward.w_2.bias
,,,"onnx::MatMul_9070 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.0.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
73,/encoder/encoders.0/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.0/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.0/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
74,/encoder/encoders.0/Add_1,Eltwise_Binary,"/encoder/encoders.0/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.0/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.0/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
75,/encoder/encoders.1/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.0/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.1/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9071 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9072 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
76,/encoder/encoders.1/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.1/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
77,/encoder/encoders.1/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.1/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.1.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9073 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.1.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
78,/encoder/encoders.1/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.1/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
79,/encoder/encoders.1/self_attn/Split,Split,"/encoder/encoders.1/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.1/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.1/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
80,/encoder/encoders.1/self_attn/Reshape,Reshape,"/encoder/encoders.1/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
81,/encoder/encoders.1/self_attn/Transpose,Transpose,"/encoder/encoders.1/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
82,/encoder/encoders.1/self_attn/Reshape_1,Reshape,"/encoder/encoders.1/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
83,/encoder/encoders.1/self_attn/Reshape_2,Reshape,"/encoder/encoders.1/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
84,/encoder/encoders.1/self_attn/Transpose_1,Transpose,"/encoder/encoders.1/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
85,/encoder/encoders.1/self_attn/Transpose_2,Transpose,"/encoder/encoders.1/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
86,/encoder/encoders.1/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.1/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
87,/encoder/encoders.1/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.1/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
88,/encoder/encoders.1/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.1/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.1.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
89,/encoder/encoders.1/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.1/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
90,/encoder/encoders.1/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.1/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
91,/encoder/encoders.1/self_attn/Transpose_3,Transpose,"/encoder/encoders.1/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
92,/encoder/encoders.1/self_attn/Add,Eltwise_Binary,"/encoder/encoders.1/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.1/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
93,/encoder/encoders.1/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.1/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
94,/encoder/encoders.1/self_attn/Transpose_4,Transpose,"/encoder/encoders.1/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
95,/encoder/encoders.1/self_attn/MatMul,MatMul,"/encoder/encoders.1/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.1/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
96,/encoder/encoders.1/self_attn/Softmax,Softmax,"/encoder/encoders.1/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
97,/encoder/encoders.1/self_attn/MatMul_1,MatMul,"/encoder/encoders.1/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.1/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
98,/encoder/encoders.1/self_attn/Transpose_5,Transpose,"/encoder/encoders.1/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
99,/encoder/encoders.1/self_attn/Reshape_3,Reshape,"/encoder/encoders.1/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
100,/encoder/encoders.1/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.1/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.1.self_attn.linear_out.bias
,,,"onnx::MatMul_9087 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.1.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
101,/encoder/encoders.1/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.1/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
102,/encoder/encoders.1/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.1/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.1/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.1/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
103,/encoder/encoders.1/Add,Eltwise_Binary,"/encoder/encoders.0/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.1/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
104,/encoder/encoders.1/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.1/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9088 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9089 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
105,/encoder/encoders.1/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.1/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.1/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
106,/encoder/encoders.1/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.1/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.1/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.1.feed_forward.w_1.bias
,,,"onnx::MatMul_9090 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.1.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
107,/encoder/encoders.1/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.1/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.1/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
108,/encoder/encoders.1/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.1/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.1/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
109,/encoder/encoders.1/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.1/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.1/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
110,/encoder/encoders.1/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.1/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.1/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.1.feed_forward.w_2.bias
,,,"onnx::MatMul_9091 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.1.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
111,/encoder/encoders.1/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.1/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.1/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
112,/encoder/encoders.1/Add_1,Eltwise_Binary,"/encoder/encoders.1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.1/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.1/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
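The rows for encoders.1 (layers 75–112 above) trace one SAN-M-style encoder layer: pre-norm, a fused q/k/v projection split three ways at [512, 1024], an FSMN memory branch (depthwise conv over time on v with a skip connection), scaled dot-product attention over 4 heads of 128, an output projection summed with the memory branch and the residual, then a second pre-norm feed-forward block (512 → 2048 → ReLU → 512). Below is a minimal NumPy sketch of that dataflow, not the converter's actual kernels: the weight names are placeholders, the value of the `Mul` scale constant is not recorded in the dump (1/√128 is assumed), and the learned LayerNorm scale/offset tensors are omitted for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-5):
    # Simplified: the DLC also applies learned scale/offset vectors.
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def depthwise_conv1d(x, w):
    # x: (T, C), w: (K, C); zero-pad K//2 per side, matching pad_amount [[0,0],[5,5]].
    # Correlation (no kernel flip), as in NN conv layers.
    K, _ = w.shape
    xp = np.pad(x, ((K // 2, K // 2), (0, 0)))
    return np.stack([(xp[t:t + K] * w).sum(axis=0) for t in range(x.shape[0])])

def encoder_layer(x, p, heads=4, dk=128):
    # x: (T, D) with T=97, D=512; p: dict of placeholder weights.
    T, D = x.shape
    h = layer_norm(x)                                   # norm1
    qkv = h @ p["w_qkv"] + p["b_qkv"]                   # fused linear_q_k_v, (T, 1536)
    q, k, v = np.split(qkv, 3, axis=-1)                 # Split at [512, 1024]
    mem = depthwise_conv1d(v, p["w_fsmn"]) + v          # FSMN branch + skip (layer 92)
    qh = q.reshape(T, heads, dk).transpose(1, 0, 2)     # (H, T, dk)
    kh = k.reshape(T, heads, dk).transpose(1, 0, 2)
    vh = v.reshape(T, heads, dk).transpose(1, 0, 2)
    att = softmax((qh * dk ** -0.5) @ kh.transpose(0, 2, 1), axis=-1) @ vh
    att = att.transpose(1, 0, 2).reshape(T, D)          # merge heads
    x = x + att @ p["w_out"] + p["b_out"] + mem         # linear_out + memory + residual
    h = layer_norm(x)                                   # norm2
    return x + np.maximum(h @ p["w_ff1"] + p["b_ff1"], 0) @ p["w_ff2"] + p["b_ff2"]
```

Note that the dump realizes the depthwise conv as a 2D `DepthWiseConv2d` over a [1, 11, 1, 512] kernel, bracketed by reshape/transpose layers (86–90) that move between NCHW and NHWC; the sketch collapses that layout bookkeeping into a plain 1D pass over time.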
113,/encoder/encoders.2/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.1/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.2/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9092 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9093 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
114,/encoder/encoders.2/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.2/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
115,/encoder/encoders.2/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.2/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.2.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9094 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.2.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
116,/encoder/encoders.2/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.2/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
117,/encoder/encoders.2/self_attn/Split,Split,"/encoder/encoders.2/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.2/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.2/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
118,/encoder/encoders.2/self_attn/Reshape,Reshape,"/encoder/encoders.2/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
119,/encoder/encoders.2/self_attn/Transpose,Transpose,"/encoder/encoders.2/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
120,/encoder/encoders.2/self_attn/Reshape_1,Reshape,"/encoder/encoders.2/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
121,/encoder/encoders.2/self_attn/Reshape_2,Reshape,"/encoder/encoders.2/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
122,/encoder/encoders.2/self_attn/Transpose_1,Transpose,"/encoder/encoders.2/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
123,/encoder/encoders.2/self_attn/Transpose_2,Transpose,"/encoder/encoders.2/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
124,/encoder/encoders.2/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.2/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
125,/encoder/encoders.2/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.2/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
126,/encoder/encoders.2/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.2/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.2.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
127,/encoder/encoders.2/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.2/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
128,/encoder/encoders.2/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.2/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
129,/encoder/encoders.2/self_attn/Transpose_3,Transpose,"/encoder/encoders.2/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
130,/encoder/encoders.2/self_attn/Add,Eltwise_Binary,"/encoder/encoders.2/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.2/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
131,/encoder/encoders.2/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.2/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
132,/encoder/encoders.2/self_attn/Transpose_4,Transpose,"/encoder/encoders.2/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
133,/encoder/encoders.2/self_attn/MatMul,MatMul,"/encoder/encoders.2/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.2/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
134,/encoder/encoders.2/self_attn/Softmax,Softmax,"/encoder/encoders.2/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
135,/encoder/encoders.2/self_attn/MatMul_1,MatMul,"/encoder/encoders.2/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.2/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
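Ops 133–135 above are the core attention arithmetic: the query stream (already scaled by the STATIC constant via the `Mul` op, operation 13) is matmul'd against the pre-transposed key stream `[1,4,128,97]`, softmaxed over `axis: 3`, then matmul'd with the value stream. A minimal numpy sketch of that sequence, assuming the scale constant is the usual `1/sqrt(d_head)` (its actual value is not shown in this dump) and using random stand-in tensors:

```python
import numpy as np

def scaled_dot_product_attention(q, k_t, v, scale):
    # q, v: [B, H, T, D]; k_t is passed pre-transposed as [B, H, D, T],
    # mirroring Transpose_4 (perm [0, 2, 3, 1]) feeding MatMul directly.
    scores = (q * scale) @ k_t                    # [B, H, T, T], the Mul + MatMul ops
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                            # MatMul_1: [B, H, T, D]

# shapes taken from the op records; values are random stand-ins
q = np.random.randn(1, 4, 97, 128).astype(np.float32)
k_t = np.random.randn(1, 4, 128, 97).astype(np.float32)
v = np.random.randn(1, 4, 97, 128).astype(np.float32)
out = scaled_dot_product_attention(q, k_t, v, scale=1.0 / np.sqrt(128))
assert out.shape == (1, 4, 97, 128)
```

Note that `transpose_in0`/`transpose_in1` are both `False` on the MatMul records because the converter already materialized the key transpose as a separate `Transpose` op.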
136,/encoder/encoders.2/self_attn/Transpose_5,Transpose,"/encoder/encoders.2/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
137,/encoder/encoders.2/self_attn/Reshape_3,Reshape,"/encoder/encoders.2/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
138,/encoder/encoders.2/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.2/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.2.self_attn.linear_out.bias
,,,"onnx::MatMul_9108 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.2.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
139,/encoder/encoders.2/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.2/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
140,/encoder/encoders.2/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.2/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.2/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.2/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
141,/encoder/encoders.2/Add,Eltwise_Binary,"/encoder/encoders.1/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.2/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
142,/encoder/encoders.2/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.2/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9109 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9110 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
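The LayerNorm records (e.g. op 142 above) normalize over `axes: [2]`, the 512-wide feature axis, with `epsilon: 1e-05`, using the two STATIC 512-element tensors as scale and shift. A small numpy sketch of that computation, with identity scale/shift standing in for the real learned parameters:

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    # normalize over the last axis (axes: [2] for a [1, T, 512] tensor)
    mu = x.mean(axis=2, keepdims=True)
    var = x.var(axis=2, keepdims=True)
    return (x - mu) / np.sqrt(var + eps) * gamma + beta

x = np.random.randn(1, 97, 512).astype(np.float32)
gamma = np.ones(512, dtype=np.float32)   # stand-in for the STATIC scale tensor
beta = np.zeros(512, dtype=np.float32)   # stand-in for the STATIC shift tensor
y = layer_norm(x, gamma, beta)
assert y.shape == (1, 97, 512)
```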
143,/encoder/encoders.2/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.2/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.2/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
144,/encoder/encoders.2/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.2/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.2/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.2.feed_forward.w_1.bias
,,,"onnx::MatMul_9111 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.2.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
145,/encoder/encoders.2/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.2/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.2/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
146,/encoder/encoders.2/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.2/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.2/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
147,/encoder/encoders.2/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.2/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.2/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
148,/encoder/encoders.2/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.2/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.2/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.2.feed_forward.w_2.bias
,,,"onnx::MatMul_9112 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.2.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
149,/encoder/encoders.2/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.2/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.2/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
150,/encoder/encoders.2/Add_1,Eltwise_Binary,"/encoder/encoders.2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.2/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.2/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
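Ops 143–150 above form one complete position-wise feed-forward sub-block: flatten to `[97,512]`, project to 2048, ReLU (the `ElementWiseNeuron` with `operation: 4`), project back to 512, restore the batch dim, then add the residual. A sketch under the assumption (consistent with the dump's weight shapes, e.g. `[2048,512]`) that FC weights are stored `[out, in]`; the weight values are random stand-ins for `onnx::MatMul_9111`/`9112` and the biases:

```python
import numpy as np

def feed_forward(x, W1, b1, W2, b2):
    # ops 143-149: reshape -> FC (512->2048) -> ReLU -> FC (2048->512) -> reshape
    T, D = x.shape[1], x.shape[2]
    h = x.reshape(T, D) @ W1.T + b1     # weights stored [out, in], hence the .T
    h = np.maximum(h, 0.0)              # ElementWiseNeuron, operation: 4 (ReLU)
    y = h @ W2.T + b2
    return y.reshape(1, T, -1)

W1 = np.random.randn(2048, 512).astype(np.float32)
b1 = np.zeros(2048, dtype=np.float32)
W2 = np.random.randn(512, 2048).astype(np.float32)
b2 = np.zeros(512, dtype=np.float32)
x = np.random.randn(1, 97, 512).astype(np.float32)
out = x + feed_forward(x, W1, b1, W2, b2)   # op 150: residual Eltwise Add
assert out.shape == (1, 97, 512)
# sanity check against the dump's "param count: 1M" for w_1:
assert W1.size + b1.size == 1_050_624       # 2048*512 + 2048
```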
151,/encoder/encoders.3/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.2/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.3/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9113 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9114 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
152,/encoder/encoders.3/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.3/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
153,/encoder/encoders.3/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.3/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.3.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9115 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.3.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
154,/encoder/encoders.3/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.3/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
155,/encoder/encoders.3/self_attn/Split,Split,"/encoder/encoders.3/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.3/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.3/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
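Ops 153–155 show that q, k, and v come from a single fused `linear_q_k_v` projection to width 1536, which the `Split` op then cuts at `split_index: [512, 1024]` along `axis: 2`; ops 156–160 reshape each 512-wide stream into 4 heads of 128. A short numpy sketch of that split-and-head-reshape, using a random stand-in for the projection output:

```python
import numpy as np

# stand-in for linear_q_k_v/Add_output_0, shape [1, 97, 1536]
qkv = np.random.randn(1, 97, 1536).astype(np.float32)

# Split at indices [512, 1024] along axis 2 -> three [1, 97, 512] tensors
q, k, v = np.split(qkv, [512, 1024], axis=2)

# Reshape + Transpose (perm [0, 2, 1, 3]): [1, 97, 512] -> [1, 4, 97, 128]
q_heads = q.reshape(1, 97, 4, 128).transpose(0, 2, 1, 3)
assert q_heads.shape == (1, 4, 97, 128)
```

The third split output (`Split_output_2`) is consumed twice: reshaped into value heads for attention, and transposed into `[1,512,97]` to feed the fsmn_block below.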
156,/encoder/encoders.3/self_attn/Reshape,Reshape,"/encoder/encoders.3/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
157,/encoder/encoders.3/self_attn/Transpose,Transpose,"/encoder/encoders.3/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
158,/encoder/encoders.3/self_attn/Reshape_1,Reshape,"/encoder/encoders.3/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
159,/encoder/encoders.3/self_attn/Reshape_2,Reshape,"/encoder/encoders.3/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
160,/encoder/encoders.3/self_attn/Transpose_1,Transpose,"/encoder/encoders.3/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
161,/encoder/encoders.3/self_attn/Transpose_2,Transpose,"/encoder/encoders.3/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
162,/encoder/encoders.3/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.3/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
163,/encoder/encoders.3/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.3/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
164,/encoder/encoders.3/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.3/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.3.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
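Ops 161–167 bracket the `DepthWiseConv2d` with layout plumbing (`[1,97,512]` → `[1,512,97]` → `[1,512,1,97]` → NHWC `[1,1,97,512]` and back), but the net effect is a per-channel 1-D convolution along time: an 11-tap kernel per each of the 512 channels, zero-padded 5 on each side (`pad_amount: [[0, 0], [5, 5]]`) so the sequence length is preserved, with no bias (`bias_op_name` is empty). A numpy sketch of that equivalent computation, assuming cross-correlation orientation as in ONNX Conv and using a random stand-in for `fsmn_block.weight`:

```python
import numpy as np

def fsmn_block(x, w):
    # x: [1, T, C] value stream; w: [K, C] per-channel taps (K=11, C=512)
    # equivalent to ops 161-167 with the NCHW/NHWC reshuffling folded away
    T, C = x.shape[1], x.shape[2]
    K = w.shape[0]
    pad = K // 2
    xp = np.pad(x[0], ((pad, pad), (0, 0)))   # zero-pad along time only
    out = np.zeros((T, C), dtype=x.dtype)
    for k in range(K):                        # depthwise: each channel uses its own taps
        out += xp[k:k + T] * w[k]
    return out[None]                          # restore the batch dim

x = np.random.randn(1, 97, 512).astype(np.float32)
w = np.random.randn(11, 512).astype(np.float32)  # stand-in for fsmn_block.weight
y = fsmn_block(x, w)
assert y.shape == (1, 97, 512)
```

Op 168 then adds this output back onto `Split_output_2`, the untransposed value stream, before it rejoins the attention output at `Add_1`.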
165,/encoder/encoders.3/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.3/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
166,/encoder/encoders.3/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.3/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
167,/encoder/encoders.3/self_attn/Transpose_3,Transpose,"/encoder/encoders.3/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
168,/encoder/encoders.3/self_attn/Add,Eltwise_Binary,"/encoder/encoders.3/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.3/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
169,/encoder/encoders.3/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.3/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
170,/encoder/encoders.3/self_attn/Transpose_4,Transpose,"/encoder/encoders.3/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
171,/encoder/encoders.3/self_attn/MatMul,MatMul,"/encoder/encoders.3/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.3/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
172,/encoder/encoders.3/self_attn/Softmax,Softmax,"/encoder/encoders.3/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
173,/encoder/encoders.3/self_attn/MatMul_1,MatMul,"/encoder/encoders.3/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.3/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
174,/encoder/encoders.3/self_attn/Transpose_5,Transpose,"/encoder/encoders.3/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
175,/encoder/encoders.3/self_attn/Reshape_3,Reshape,"/encoder/encoders.3/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
176,/encoder/encoders.3/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.3/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.3.self_attn.linear_out.bias
,,,"onnx::MatMul_9129 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.3.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
177,/encoder/encoders.3/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.3/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
178,/encoder/encoders.3/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.3/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.3/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.3/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
179,/encoder/encoders.3/Add,Eltwise_Binary,"/encoder/encoders.2/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.3/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.3/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
180,/encoder/encoders.3/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.3/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.3/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9130 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9131 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
181,/encoder/encoders.3/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.3/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.3/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
182,/encoder/encoders.3/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.3/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.3/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.3.feed_forward.w_1.bias
,,,"onnx::MatMul_9132 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.3.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
183,/encoder/encoders.3/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.3/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.3/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
184,/encoder/encoders.3/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.3/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.3/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
185,/encoder/encoders.3/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.3/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.3/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
186,/encoder/encoders.3/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.3/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.3/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.3.feed_forward.w_2.bias
,,,"onnx::MatMul_9133 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.3.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
187,/encoder/encoders.3/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.3/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.3/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
188,/encoder/encoders.3/Add_1,Eltwise_Binary,"/encoder/encoders.3/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.3/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.3/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
189,/encoder/encoders.4/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.3/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.4/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9134 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9135 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
190,/encoder/encoders.4/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.4/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
191,/encoder/encoders.4/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.4/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.4.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9136 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.4.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
192,/encoder/encoders.4/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.4/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
193,/encoder/encoders.4/self_attn/Split,Split,"/encoder/encoders.4/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.4/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.4/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
194,/encoder/encoders.4/self_attn/Reshape,Reshape,"/encoder/encoders.4/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
195,/encoder/encoders.4/self_attn/Transpose,Transpose,"/encoder/encoders.4/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
196,/encoder/encoders.4/self_attn/Reshape_1,Reshape,"/encoder/encoders.4/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
197,/encoder/encoders.4/self_attn/Reshape_2,Reshape,"/encoder/encoders.4/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
198,/encoder/encoders.4/self_attn/Transpose_1,Transpose,"/encoder/encoders.4/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
199,/encoder/encoders.4/self_attn/Transpose_2,Transpose,"/encoder/encoders.4/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
200,/encoder/encoders.4/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.4/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
201,/encoder/encoders.4/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.4/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
202,/encoder/encoders.4/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.4/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.4.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
203,/encoder/encoders.4/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.4/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
204,/encoder/encoders.4/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.4/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
205,/encoder/encoders.4/self_attn/Transpose_3,Transpose,"/encoder/encoders.4/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
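
Layers 199–205 above (and the matching group in every encoder block) are a layout sandwich: the converter lowers a 1-D depthwise conv over [1,512,97] to a 4-D NHWC DepthWiseConv2d with a dummy height of 1, then undoes the layout. The numpy sketch below mimics those exact reshapes/transposes and checks the result against a direct per-channel 1-D conv (kernel 11, pad 5, as in the dump); the weights are random stand-ins, and the dump's [1,11,1,512] kernel is used here in squeezed [11,512] form.

```python
import numpy as np

rng = np.random.default_rng(1)
C, T, K, P = 512, 97, 11, 5
x = rng.standard_normal((1, T, C)).astype(np.float32)   # Split_output_2
w = rng.standard_normal((K, C)).astype(np.float32)      # fsmn_block.weight, squeezed

def depthwise_1d(x, w, pad):
    # Direct per-channel 1-D conv over time: out[b,t,c] = sum_k x[b,t+k-pad,c] * w[k,c]
    xp = np.pad(x, ((0, 0), (pad, pad), (0, 0)))
    return np.stack([np.einsum('bkc,kc->bc', xp[:, t:t + K], w)
                     for t in range(T)], axis=1)

# The converter's lowering, step by step (layer numbers from the dump):
y = x.transpose(0, 2, 1)                 # 199 Transpose_2      -> [1,512,97]
y = y.reshape(1, C, 1, T)                # 200 reshape_to_2d    -> [1,512,1,97]
y = y.transpose(0, 2, 3, 1)              # 201 .nhwc            -> [1,1,97,512]
yp = np.pad(y, ((0, 0), (0, 0), (P, P), (0, 0)))  # 202 pad_amount [[0,0],[5,5]]
y = np.stack([np.einsum('bhkc,kc->bhc', yp[:, :, t:t + K], w)
              for t in range(T)], axis=2)          # 202 depthwise over the W axis
y = y.transpose(0, 3, 1, 2)              # 203 .nchw            -> [1,512,1,97]
y = y.reshape(1, C, T)                   # 204 reshape          -> [1,512,97]
y = y.transpose(0, 2, 1)                 # 205 Transpose_3      -> [1,97,512]

assert np.allclose(y, depthwise_1d(x, w, P), atol=1e-4)
```

The sandwich is pure data movement: only layer 202 does arithmetic, so backends that fold adjacent Reshape/Transpose pairs can execute this group as a single conv.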
206,/encoder/encoders.4/self_attn/Add,Eltwise_Binary,"/encoder/encoders.4/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.4/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
207,/encoder/encoders.4/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.4/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
208,/encoder/encoders.4/self_attn/Transpose_4,Transpose,"/encoder/encoders.4/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
209,/encoder/encoders.4/self_attn/MatMul,MatMul,"/encoder/encoders.4/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.4/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
210,/encoder/encoders.4/self_attn/Softmax,Softmax,"/encoder/encoders.4/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
211,/encoder/encoders.4/self_attn/MatMul_1,MatMul,"/encoder/encoders.4/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.4/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
212,/encoder/encoders.4/self_attn/Transpose_5,Transpose,"/encoder/encoders.4/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
213,/encoder/encoders.4/self_attn/Reshape_3,Reshape,"/encoder/encoders.4/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
214,/encoder/encoders.4/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.4/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.4.self_attn.linear_out.bias
,,,"onnx::MatMul_9150 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.4.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
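
The "param count" figures for FullyConnected layers follow from weight plus bias sizes; for layer 214 the [512,512] weight `onnx::MatMul_9150` and the 512-element bias give the reported "262k". A quick check (the percentages are relative to the whole model's totals, which this chunk does not show):

```python
# FullyConnected parameter count = weight elements + bias elements.
params = 512 * 512 + 512        # layer 214: linear_out weight + bias
print(params)                   # 262656
print(f"{params // 1000}k")     # "262k", matching the dump

# Same arithmetic for the wider FFN projection (layer 220, w_1):
print(512 * 2048 + 2048)        # 1050624 -> reported as "1M"
```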
215,/encoder/encoders.4/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.4/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
216,/encoder/encoders.4/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.4/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.4/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.4/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
217,/encoder/encoders.4/Add,Eltwise_Binary,"/encoder/encoders.3/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.4/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.4/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
218,/encoder/encoders.4/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.4/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.4/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9151 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9152 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
219,/encoder/encoders.4/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.4/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.4/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
220,/encoder/encoders.4/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.4/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.4/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.4.feed_forward.w_1.bias
,,,"onnx::MatMul_9153 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.4.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
221,/encoder/encoders.4/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.4/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.4/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
222,/encoder/encoders.4/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.4/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.4/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
223,/encoder/encoders.4/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.4/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.4/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
224,/encoder/encoders.4/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.4/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.4/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.4.feed_forward.w_2.bias
,,,"onnx::MatMul_9154 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.4.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
225,/encoder/encoders.4/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.4/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.4/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
226,/encoder/encoders.4/Add_1,Eltwise_Binary,"/encoder/encoders.4/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.4/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.4/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
227,/encoder/encoders.5/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.4/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.5/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9155 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9156 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
228,/encoder/encoders.5/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.5/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
229,/encoder/encoders.5/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.5/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.5.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9157 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.5.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
230,/encoder/encoders.5/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.5/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
231,/encoder/encoders.5/self_attn/Split,Split,"/encoder/encoders.5/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.5/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.5/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
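
The Split op (layer 231) cuts the fused [1,97,1536] Q/K/V projection along axis 2 at the cut points given by `split_index`, yielding three [1,97,512] tensors. In numpy terms (illustrative only; zeros stand in for the real activations):

```python
import numpy as np

qkv = np.zeros((1, 97, 1536), dtype=np.float32)   # linear_q_k_v/Add_output_0
# split_index: [512, 1024] are the cut points along axis 2
q, k, v = np.split(qkv, [512, 1024], axis=2)
print(q.shape, k.shape, v.shape)   # (1, 97, 512) (1, 97, 512) (1, 97, 512)
```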
232,/encoder/encoders.5/self_attn/Reshape,Reshape,"/encoder/encoders.5/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
233,/encoder/encoders.5/self_attn/Transpose,Transpose,"/encoder/encoders.5/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
234,/encoder/encoders.5/self_attn/Reshape_1,Reshape,"/encoder/encoders.5/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
235,/encoder/encoders.5/self_attn/Reshape_2,Reshape,"/encoder/encoders.5/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
236,/encoder/encoders.5/self_attn/Transpose_1,Transpose,"/encoder/encoders.5/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
237,/encoder/encoders.5/self_attn/Transpose_2,Transpose,"/encoder/encoders.5/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
238,/encoder/encoders.5/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.5/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
239,/encoder/encoders.5/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.5/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
240,/encoder/encoders.5/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.5/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.5.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
241,/encoder/encoders.5/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.5/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
242,/encoder/encoders.5/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.5/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
243,/encoder/encoders.5/self_attn/Transpose_3,Transpose,"/encoder/encoders.5/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
244,/encoder/encoders.5/self_attn/Add,Eltwise_Binary,"/encoder/encoders.5/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.5/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
245,/encoder/encoders.5/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.5/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
246,/encoder/encoders.5/self_attn/Transpose_4,Transpose,"/encoder/encoders.5/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
247,/encoder/encoders.5/self_attn/MatMul,MatMul,"/encoder/encoders.5/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.5/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
248,/encoder/encoders.5/self_attn/Softmax,Softmax,"/encoder/encoders.5/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
249,/encoder/encoders.5/self_attn/MatMul_1,MatMul,"/encoder/encoders.5/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.5/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
250,/encoder/encoders.5/self_attn/Transpose_5,Transpose,"/encoder/encoders.5/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
251,/encoder/encoders.5/self_attn/Reshape_3,Reshape,"/encoder/encoders.5/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
252,/encoder/encoders.5/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.5/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.5.self_attn.linear_out.bias
,,,"onnx::MatMul_9171 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.5.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
253,/encoder/encoders.5/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.5/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
254,/encoder/encoders.5/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.5/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.5/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.5/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
255,/encoder/encoders.5/Add,Eltwise_Binary,"/encoder/encoders.4/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.5/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.5/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
256,/encoder/encoders.5/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.5/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.5/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9172 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9173 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
257,/encoder/encoders.5/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.5/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.5/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
258,/encoder/encoders.5/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.5/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.5/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.5.feed_forward.w_1.bias
,,,"onnx::MatMul_9174 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.5.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
259,/encoder/encoders.5/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.5/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.5/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
260,/encoder/encoders.5/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.5/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.5/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
261,/encoder/encoders.5/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.5/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.5/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
262,/encoder/encoders.5/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.5/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.5/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.5.feed_forward.w_2.bias
,,,"onnx::MatMul_9175 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.5.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
263,/encoder/encoders.5/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.5/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.5/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
264,/encoder/encoders.5/Add_1,Eltwise_Binary,"/encoder/encoders.5/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.5/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.5/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
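
Layers 227–264 above form one complete encoder block: pre-LayerNorm, a fused Q/K/V projection split three ways, a multi-head attention branch running in parallel with an FSMN depthwise-conv memory branch over V (with its own residual, layer 244), an output projection, a double residual add (layers 254–255), then a second LayerNorm and a ReLU feed-forward with its residual. A minimal numpy sketch of that dataflow, with assumptions flagged: all weights are random placeholders, biases are omitted, and the one-element Mul constant (layer 245) is assumed to be the usual 1/sqrt(d_head) attention scale.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, H = 97, 512, 4            # sequence length, model dim, heads (from the dump)
Dh = D // H                     # 128 per head

def layer_norm(x, eps=1e-5):    # LayerNorm over the last axis (axes: [2], epsilon 1e-05)
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def heads(t):                   # [1,T,D] -> [1,H,T,Dh], as Reshape + Transpose do
    return t.reshape(1, T, H, Dh).transpose(0, 2, 1, 3)

x = rng.standard_normal((1, T, D)).astype(np.float32)
W_qkv = 0.02 * rng.standard_normal((D, 3 * D)).astype(np.float32)   # placeholder weights
W_out = 0.02 * rng.standard_normal((D, D)).astype(np.float32)
W1 = 0.02 * rng.standard_normal((D, 2048)).astype(np.float32)
W2 = 0.02 * rng.standard_normal((2048, D)).astype(np.float32)
fsmn_w = 0.02 * rng.standard_normal((11, D)).astype(np.float32)

h = layer_norm(x)                               # 227 norm1
q, k, v = np.split(h @ W_qkv, 3, axis=2)        # 229-231 fused QKV + Split

# FSMN memory branch (237-244): per-channel conv over time on v, plus residual
v_pad = np.pad(v, ((0, 0), (5, 5), (0, 0)))
mem = np.stack([np.einsum('bkc,kc->bc', v_pad[:, t:t + 11], fsmn_w)
                for t in range(T)], axis=1) + v

# Attention branch (245-253); the scale constant is an assumption, see above
scores = (heads(q) * Dh ** -0.5) @ heads(k).transpose(0, 1, 3, 2)
attn = softmax(scores, axis=3) @ heads(v)
attn_out = attn.transpose(0, 2, 1, 3).reshape(1, T, D) @ W_out

x = x + attn_out + mem                          # 254 Add_1, 255 encoders.5/Add
x = x + np.maximum(layer_norm(x) @ W1, 0.0) @ W2   # 256-264 norm2 + ReLU FFN
```

Every tensor keeps the [1,97,512] shape across the block, which is why the same group of 38 layers repeats verbatim (with fresh weight names) for encoders.6 and beyond.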
265,/encoder/encoders.6/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.5/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.6/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9176 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9177 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
266,/encoder/encoders.6/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.6/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
267,/encoder/encoders.6/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.6/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.6.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9178 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.6.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
268,/encoder/encoders.6/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.6/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
269,/encoder/encoders.6/self_attn/Split,Split,"/encoder/encoders.6/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.6/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.6/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
270,/encoder/encoders.6/self_attn/Reshape,Reshape,"/encoder/encoders.6/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
271,/encoder/encoders.6/self_attn/Transpose,Transpose,"/encoder/encoders.6/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
272,/encoder/encoders.6/self_attn/Reshape_1,Reshape,"/encoder/encoders.6/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
273,/encoder/encoders.6/self_attn/Reshape_2,Reshape,"/encoder/encoders.6/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
274,/encoder/encoders.6/self_attn/Transpose_1,Transpose,"/encoder/encoders.6/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
275,/encoder/encoders.6/self_attn/Transpose_2,Transpose,"/encoder/encoders.6/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
276,/encoder/encoders.6/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.6/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
277,/encoder/encoders.6/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.6/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
278,/encoder/encoders.6/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.6/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.6.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
279,/encoder/encoders.6/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.6/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
280,/encoder/encoders.6/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.6/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
281,/encoder/encoders.6/self_attn/Transpose_3,Transpose,"/encoder/encoders.6/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
282,/encoder/encoders.6/self_attn/Add,Eltwise_Binary,"/encoder/encoders.6/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.6/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
283,/encoder/encoders.6/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.6/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
284,/encoder/encoders.6/self_attn/Transpose_4,Transpose,"/encoder/encoders.6/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
285,/encoder/encoders.6/self_attn/MatMul,MatMul,"/encoder/encoders.6/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.6/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
286,/encoder/encoders.6/self_attn/Softmax,Softmax,"/encoder/encoders.6/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
287,/encoder/encoders.6/self_attn/MatMul_1,MatMul,"/encoder/encoders.6/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.6/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
288,/encoder/encoders.6/self_attn/Transpose_5,Transpose,"/encoder/encoders.6/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
289,/encoder/encoders.6/self_attn/Reshape_3,Reshape,"/encoder/encoders.6/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
290,/encoder/encoders.6/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.6/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.6.self_attn.linear_out.bias
,,,"onnx::MatMul_9192 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.6.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
291,/encoder/encoders.6/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.6/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
292,/encoder/encoders.6/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.6/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.6/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.6/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
293,/encoder/encoders.6/Add,Eltwise_Binary,"/encoder/encoders.5/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.6/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.6/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
294,/encoder/encoders.6/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.6/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.6/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9193 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9194 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
295,/encoder/encoders.6/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.6/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.6/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
296,/encoder/encoders.6/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.6/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.6/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.6.feed_forward.w_1.bias
,,,"onnx::MatMul_9195 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.6.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
297,/encoder/encoders.6/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.6/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.6/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
298,/encoder/encoders.6/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.6/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.6/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
299,/encoder/encoders.6/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.6/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.6/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
300,/encoder/encoders.6/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.6/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.6/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.6.feed_forward.w_2.bias
,,,"onnx::MatMul_9196 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.6.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
301,/encoder/encoders.6/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.6/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.6/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
302,/encoder/encoders.6/Add_1,Eltwise_Binary,"/encoder/encoders.6/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.6/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.6/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
303,/encoder/encoders.7/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.6/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.7/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9197 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9198 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
304,/encoder/encoders.7/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.7/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
305,/encoder/encoders.7/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.7/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.7.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9199 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.7.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
306,/encoder/encoders.7/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.7/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
307,/encoder/encoders.7/self_attn/Split,Split,"/encoder/encoders.7/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.7/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.7/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
308,/encoder/encoders.7/self_attn/Reshape,Reshape,"/encoder/encoders.7/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
309,/encoder/encoders.7/self_attn/Transpose,Transpose,"/encoder/encoders.7/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
310,/encoder/encoders.7/self_attn/Reshape_1,Reshape,"/encoder/encoders.7/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
311,/encoder/encoders.7/self_attn/Reshape_2,Reshape,"/encoder/encoders.7/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
312,/encoder/encoders.7/self_attn/Transpose_1,Transpose,"/encoder/encoders.7/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
313,/encoder/encoders.7/self_attn/Transpose_2,Transpose,"/encoder/encoders.7/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
314,/encoder/encoders.7/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.7/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
315,/encoder/encoders.7/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.7/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
316,/encoder/encoders.7/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.7/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.7.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
317,/encoder/encoders.7/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.7/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
318,/encoder/encoders.7/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.7/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
319,/encoder/encoders.7/self_attn/Transpose_3,Transpose,"/encoder/encoders.7/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
320,/encoder/encoders.7/self_attn/Add,Eltwise_Binary,"/encoder/encoders.7/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.7/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
321,/encoder/encoders.7/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.7/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
322,/encoder/encoders.7/self_attn/Transpose_4,Transpose,"/encoder/encoders.7/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
323,/encoder/encoders.7/self_attn/MatMul,MatMul,"/encoder/encoders.7/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.7/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
324,/encoder/encoders.7/self_attn/Softmax,Softmax,"/encoder/encoders.7/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
325,/encoder/encoders.7/self_attn/MatMul_1,MatMul,"/encoder/encoders.7/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.7/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
326,/encoder/encoders.7/self_attn/Transpose_5,Transpose,"/encoder/encoders.7/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
327,/encoder/encoders.7/self_attn/Reshape_3,Reshape,"/encoder/encoders.7/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
328,/encoder/encoders.7/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.7/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.7.self_attn.linear_out.bias
,,,"onnx::MatMul_9213 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.7.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
329,/encoder/encoders.7/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.7/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
330,/encoder/encoders.7/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.7/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.7/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.7/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
331,/encoder/encoders.7/Add,Eltwise_Binary,"/encoder/encoders.6/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.7/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.7/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
332,/encoder/encoders.7/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.7/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.7/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9214 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9215 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
333,/encoder/encoders.7/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.7/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.7/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
334,/encoder/encoders.7/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.7/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.7/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.7.feed_forward.w_1.bias
,,,"onnx::MatMul_9216 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.7.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
335,/encoder/encoders.7/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.7/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.7/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
336,/encoder/encoders.7/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.7/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.7/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
337,/encoder/encoders.7/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.7/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.7/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
338,/encoder/encoders.7/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.7/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.7/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.7.feed_forward.w_2.bias
,,,"onnx::MatMul_9217 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.7.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
339,/encoder/encoders.7/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.7/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.7/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
340,/encoder/encoders.7/Add_1,Eltwise_Binary,"/encoder/encoders.7/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.7/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.7/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
341,/encoder/encoders.8/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.7/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.8/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9218 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9219 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
342,/encoder/encoders.8/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.8/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
343,/encoder/encoders.8/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.8/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.8.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9220 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.8.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
344,/encoder/encoders.8/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.8/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
345,/encoder/encoders.8/self_attn/Split,Split,"/encoder/encoders.8/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.8/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.8/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
346,/encoder/encoders.8/self_attn/Reshape,Reshape,"/encoder/encoders.8/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
347,/encoder/encoders.8/self_attn/Transpose,Transpose,"/encoder/encoders.8/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
348,/encoder/encoders.8/self_attn/Reshape_1,Reshape,"/encoder/encoders.8/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
349,/encoder/encoders.8/self_attn/Reshape_2,Reshape,"/encoder/encoders.8/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
350,/encoder/encoders.8/self_attn/Transpose_1,Transpose,"/encoder/encoders.8/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
351,/encoder/encoders.8/self_attn/Transpose_2,Transpose,"/encoder/encoders.8/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
352,/encoder/encoders.8/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.8/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
353,/encoder/encoders.8/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.8/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
354,/encoder/encoders.8/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.8/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.8.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
355,/encoder/encoders.8/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.8/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
356,/encoder/encoders.8/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.8/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
357,/encoder/encoders.8/self_attn/Transpose_3,Transpose,"/encoder/encoders.8/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
358,/encoder/encoders.8/self_attn/Add,Eltwise_Binary,"/encoder/encoders.8/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.8/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
359,/encoder/encoders.8/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.8/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
360,/encoder/encoders.8/self_attn/Transpose_4,Transpose,"/encoder/encoders.8/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
361,/encoder/encoders.8/self_attn/MatMul,MatMul,"/encoder/encoders.8/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.8/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
362,/encoder/encoders.8/self_attn/Softmax,Softmax,"/encoder/encoders.8/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
363,/encoder/encoders.8/self_attn/MatMul_1,MatMul,"/encoder/encoders.8/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.8/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
364,/encoder/encoders.8/self_attn/Transpose_5,Transpose,"/encoder/encoders.8/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
365,/encoder/encoders.8/self_attn/Reshape_3,Reshape,"/encoder/encoders.8/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
366,/encoder/encoders.8/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.8/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.8.self_attn.linear_out.bias
,,,"onnx::MatMul_9234 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.8.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
367,/encoder/encoders.8/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.8/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
368,/encoder/encoders.8/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.8/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.8/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.8/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
369,/encoder/encoders.8/Add,Eltwise_Binary,"/encoder/encoders.7/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.8/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.8/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
370,/encoder/encoders.8/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.8/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.8/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9235 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9236 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
371,/encoder/encoders.8/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.8/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.8/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
372,/encoder/encoders.8/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.8/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.8/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.8.feed_forward.w_1.bias
,,,"onnx::MatMul_9237 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.8.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
373,/encoder/encoders.8/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.8/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.8/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
374,/encoder/encoders.8/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.8/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.8/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
375,/encoder/encoders.8/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.8/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.8/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
376,/encoder/encoders.8/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.8/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.8/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.8.feed_forward.w_2.bias
,,,"onnx::MatMul_9238 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.8.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
377,/encoder/encoders.8/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.8/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.8/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
378,/encoder/encoders.8/Add_1,Eltwise_Binary,"/encoder/encoders.8/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.8/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.8/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
379,/encoder/encoders.9/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.8/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.9/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9239 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9240 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
380,/encoder/encoders.9/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.9/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
381,/encoder/encoders.9/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.9/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.9.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9241 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.9.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
382,/encoder/encoders.9/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.9/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
383,/encoder/encoders.9/self_attn/Split,Split,"/encoder/encoders.9/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.9/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.9/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
384,/encoder/encoders.9/self_attn/Reshape,Reshape,"/encoder/encoders.9/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
385,/encoder/encoders.9/self_attn/Transpose,Transpose,"/encoder/encoders.9/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
386,/encoder/encoders.9/self_attn/Reshape_1,Reshape,"/encoder/encoders.9/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
387,/encoder/encoders.9/self_attn/Reshape_2,Reshape,"/encoder/encoders.9/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
388,/encoder/encoders.9/self_attn/Transpose_1,Transpose,"/encoder/encoders.9/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
389,/encoder/encoders.9/self_attn/Transpose_2,Transpose,"/encoder/encoders.9/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
390,/encoder/encoders.9/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.9/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
391,/encoder/encoders.9/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.9/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
392,/encoder/encoders.9/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.9/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.9.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
393,/encoder/encoders.9/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.9/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
394,/encoder/encoders.9/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.9/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
395,/encoder/encoders.9/self_attn/Transpose_3,Transpose,"/encoder/encoders.9/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
396,/encoder/encoders.9/self_attn/Add,Eltwise_Binary,"/encoder/encoders.9/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.9/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
397,/encoder/encoders.9/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.9/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
398,/encoder/encoders.9/self_attn/Transpose_4,Transpose,"/encoder/encoders.9/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
399,/encoder/encoders.9/self_attn/MatMul,MatMul,"/encoder/encoders.9/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.9/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
400,/encoder/encoders.9/self_attn/Softmax,Softmax,"/encoder/encoders.9/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
401,/encoder/encoders.9/self_attn/MatMul_1,MatMul,"/encoder/encoders.9/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.9/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
402,/encoder/encoders.9/self_attn/Transpose_5,Transpose,"/encoder/encoders.9/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
403,/encoder/encoders.9/self_attn/Reshape_3,Reshape,"/encoder/encoders.9/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
404,/encoder/encoders.9/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.9/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.9.self_attn.linear_out.bias
,,,"onnx::MatMul_9255 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.9.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
405,/encoder/encoders.9/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.9/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
406,/encoder/encoders.9/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.9/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.9/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.9/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
407,/encoder/encoders.9/Add,Eltwise_Binary,"/encoder/encoders.8/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.9/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.9/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
408,/encoder/encoders.9/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.9/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.9/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9256 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9257 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
409,/encoder/encoders.9/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.9/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.9/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
410,/encoder/encoders.9/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.9/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.9/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.9.feed_forward.w_1.bias
,,,"onnx::MatMul_9258 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.9.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
411,/encoder/encoders.9/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.9/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.9/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
412,/encoder/encoders.9/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.9/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.9/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
413,/encoder/encoders.9/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.9/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.9/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
414,/encoder/encoders.9/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.9/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.9/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.9.feed_forward.w_2.bias
,,,"onnx::MatMul_9259 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.9.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
415,/encoder/encoders.9/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.9/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.9/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
416,/encoder/encoders.9/Add_1,Eltwise_Binary,"/encoder/encoders.9/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.9/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.9/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
417,/encoder/encoders.10/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.9/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.10/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9260 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9261 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
418,/encoder/encoders.10/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.10/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
419,/encoder/encoders.10/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.10/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.10.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9262 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.10.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
420,/encoder/encoders.10/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.10/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
421,/encoder/encoders.10/self_attn/Split,Split,"/encoder/encoders.10/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.10/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.10/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
422,/encoder/encoders.10/self_attn/Reshape,Reshape,"/encoder/encoders.10/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
423,/encoder/encoders.10/self_attn/Transpose,Transpose,"/encoder/encoders.10/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
424,/encoder/encoders.10/self_attn/Reshape_1,Reshape,"/encoder/encoders.10/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
425,/encoder/encoders.10/self_attn/Reshape_2,Reshape,"/encoder/encoders.10/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
426,/encoder/encoders.10/self_attn/Transpose_1,Transpose,"/encoder/encoders.10/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
427,/encoder/encoders.10/self_attn/Transpose_2,Transpose,"/encoder/encoders.10/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
428,/encoder/encoders.10/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.10/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
429,/encoder/encoders.10/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.10/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
430,/encoder/encoders.10/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.10/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.10.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
431,/encoder/encoders.10/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.10/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
432,/encoder/encoders.10/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.10/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
433,/encoder/encoders.10/self_attn/Transpose_3,Transpose,"/encoder/encoders.10/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
434,/encoder/encoders.10/self_attn/Add,Eltwise_Binary,"/encoder/encoders.10/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.10/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
435,/encoder/encoders.10/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.10/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
436,/encoder/encoders.10/self_attn/Transpose_4,Transpose,"/encoder/encoders.10/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
437,/encoder/encoders.10/self_attn/MatMul,MatMul,"/encoder/encoders.10/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.10/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
438,/encoder/encoders.10/self_attn/Softmax,Softmax,"/encoder/encoders.10/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
439,/encoder/encoders.10/self_attn/MatMul_1,MatMul,"/encoder/encoders.10/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.10/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
440,/encoder/encoders.10/self_attn/Transpose_5,Transpose,"/encoder/encoders.10/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
441,/encoder/encoders.10/self_attn/Reshape_3,Reshape,"/encoder/encoders.10/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
442,/encoder/encoders.10/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.10/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.10.self_attn.linear_out.bias
,,,"onnx::MatMul_9276 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.10.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
443,/encoder/encoders.10/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.10/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
444,/encoder/encoders.10/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.10/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.10/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.10/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
445,/encoder/encoders.10/Add,Eltwise_Binary,"/encoder/encoders.9/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.10/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.10/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
446,/encoder/encoders.10/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.10/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.10/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9277 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9278 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
447,/encoder/encoders.10/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.10/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.10/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
448,/encoder/encoders.10/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.10/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.10/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.10.feed_forward.w_1.bias
,,,"onnx::MatMul_9279 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.10.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
449,/encoder/encoders.10/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.10/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.10/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
450,/encoder/encoders.10/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.10/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.10/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
451,/encoder/encoders.10/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.10/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.10/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
452,/encoder/encoders.10/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.10/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.10/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.10.feed_forward.w_2.bias
,,,"onnx::MatMul_9280 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.10.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
453,/encoder/encoders.10/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.10/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.10/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
454,/encoder/encoders.10/Add_1,Eltwise_Binary,"/encoder/encoders.10/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.10/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.10/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
455,/encoder/encoders.11/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.10/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.11/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9281 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9282 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
456,/encoder/encoders.11/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.11/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
457,/encoder/encoders.11/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.11/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.11.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9283 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.11.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
458,/encoder/encoders.11/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.11/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
459,/encoder/encoders.11/self_attn/Split,Split,"/encoder/encoders.11/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.11/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.11/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
460,/encoder/encoders.11/self_attn/Reshape,Reshape,"/encoder/encoders.11/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
461,/encoder/encoders.11/self_attn/Transpose,Transpose,"/encoder/encoders.11/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
462,/encoder/encoders.11/self_attn/Reshape_1,Reshape,"/encoder/encoders.11/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
463,/encoder/encoders.11/self_attn/Reshape_2,Reshape,"/encoder/encoders.11/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
464,/encoder/encoders.11/self_attn/Transpose_1,Transpose,"/encoder/encoders.11/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
465,/encoder/encoders.11/self_attn/Transpose_2,Transpose,"/encoder/encoders.11/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
466,/encoder/encoders.11/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.11/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
467,/encoder/encoders.11/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.11/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
468,/encoder/encoders.11/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.11/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.11.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
469,/encoder/encoders.11/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.11/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
470,/encoder/encoders.11/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.11/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
471,/encoder/encoders.11/self_attn/Transpose_3,Transpose,"/encoder/encoders.11/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
472,/encoder/encoders.11/self_attn/Add,Eltwise_Binary,"/encoder/encoders.11/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.11/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
473,/encoder/encoders.11/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.11/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
474,/encoder/encoders.11/self_attn/Transpose_4,Transpose,"/encoder/encoders.11/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
475,/encoder/encoders.11/self_attn/MatMul,MatMul,"/encoder/encoders.11/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.11/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
476,/encoder/encoders.11/self_attn/Softmax,Softmax,"/encoder/encoders.11/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
477,/encoder/encoders.11/self_attn/MatMul_1,MatMul,"/encoder/encoders.11/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.11/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
478,/encoder/encoders.11/self_attn/Transpose_5,Transpose,"/encoder/encoders.11/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
479,/encoder/encoders.11/self_attn/Reshape_3,Reshape,"/encoder/encoders.11/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
480,/encoder/encoders.11/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.11/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.11.self_attn.linear_out.bias
,,,"onnx::MatMul_9297 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.11.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
481,/encoder/encoders.11/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.11/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
482,/encoder/encoders.11/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.11/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.11/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.11/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
483,/encoder/encoders.11/Add,Eltwise_Binary,"/encoder/encoders.10/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.11/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.11/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
484,/encoder/encoders.11/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.11/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.11/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9298 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9299 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
485,/encoder/encoders.11/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.11/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.11/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
486,/encoder/encoders.11/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.11/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.11/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.11.feed_forward.w_1.bias
,,,"onnx::MatMul_9300 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.11.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
487,/encoder/encoders.11/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.11/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.11/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
488,/encoder/encoders.11/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.11/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.11/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
489,/encoder/encoders.11/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.11/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.11/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
490,/encoder/encoders.11/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.11/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.11/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.11.feed_forward.w_2.bias
,,,"onnx::MatMul_9301 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.11.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
491,/encoder/encoders.11/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.11/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.11/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
492,/encoder/encoders.11/Add_1,Eltwise_Binary,"/encoder/encoders.11/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.11/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.11/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
493,/encoder/encoders.12/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.11/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.12/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9302 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9303 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
494,/encoder/encoders.12/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.12/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
495,/encoder/encoders.12/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.12/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.12.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9304 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.12.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
496,/encoder/encoders.12/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.12/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
497,/encoder/encoders.12/self_attn/Split,Split,"/encoder/encoders.12/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.12/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.12/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
498,/encoder/encoders.12/self_attn/Reshape,Reshape,"/encoder/encoders.12/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
499,/encoder/encoders.12/self_attn/Transpose,Transpose,"/encoder/encoders.12/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
500,/encoder/encoders.12/self_attn/Reshape_1,Reshape,"/encoder/encoders.12/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
501,/encoder/encoders.12/self_attn/Reshape_2,Reshape,"/encoder/encoders.12/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
502,/encoder/encoders.12/self_attn/Transpose_1,Transpose,"/encoder/encoders.12/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
503,/encoder/encoders.12/self_attn/Transpose_2,Transpose,"/encoder/encoders.12/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
504,/encoder/encoders.12/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.12/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
505,/encoder/encoders.12/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.12/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
506,/encoder/encoders.12/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.12/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.12.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
507,/encoder/encoders.12/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.12/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
508,/encoder/encoders.12/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.12/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
509,/encoder/encoders.12/self_attn/Transpose_3,Transpose,"/encoder/encoders.12/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
510,/encoder/encoders.12/self_attn/Add,Eltwise_Binary,"/encoder/encoders.12/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.12/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
511,/encoder/encoders.12/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.12/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
512,/encoder/encoders.12/self_attn/Transpose_4,Transpose,"/encoder/encoders.12/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
513,/encoder/encoders.12/self_attn/MatMul,MatMul,"/encoder/encoders.12/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.12/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
514,/encoder/encoders.12/self_attn/Softmax,Softmax,"/encoder/encoders.12/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
515,/encoder/encoders.12/self_attn/MatMul_1,MatMul,"/encoder/encoders.12/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.12/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
516,/encoder/encoders.12/self_attn/Transpose_5,Transpose,"/encoder/encoders.12/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
517,/encoder/encoders.12/self_attn/Reshape_3,Reshape,"/encoder/encoders.12/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
518,/encoder/encoders.12/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.12/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.12.self_attn.linear_out.bias
,,,"onnx::MatMul_9318 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.12.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
519,/encoder/encoders.12/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.12/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
520,/encoder/encoders.12/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.12/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.12/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.12/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
521,/encoder/encoders.12/Add,Eltwise_Binary,"/encoder/encoders.11/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.12/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.12/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
522,/encoder/encoders.12/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.12/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.12/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9319 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9320 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
523,/encoder/encoders.12/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.12/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.12/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
524,/encoder/encoders.12/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.12/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.12/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.12.feed_forward.w_1.bias
,,,"onnx::MatMul_9321 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.12.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
525,/encoder/encoders.12/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.12/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.12/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
526,/encoder/encoders.12/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.12/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.12/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
527,/encoder/encoders.12/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.12/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.12/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
528,/encoder/encoders.12/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.12/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.12/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.12.feed_forward.w_2.bias
,,,"onnx::MatMul_9322 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.12.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
529,/encoder/encoders.12/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.12/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.12/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
530,/encoder/encoders.12/Add_1,Eltwise_Binary,"/encoder/encoders.12/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.12/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.12/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
531,/encoder/encoders.13/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.12/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.13/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9323 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9324 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
532,/encoder/encoders.13/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.13/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
533,/encoder/encoders.13/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.13/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.13.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9325 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.13.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
534,/encoder/encoders.13/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.13/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
535,/encoder/encoders.13/self_attn/Split,Split,"/encoder/encoders.13/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.13/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.13/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
536,/encoder/encoders.13/self_attn/Reshape,Reshape,"/encoder/encoders.13/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
537,/encoder/encoders.13/self_attn/Transpose,Transpose,"/encoder/encoders.13/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
538,/encoder/encoders.13/self_attn/Reshape_1,Reshape,"/encoder/encoders.13/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
539,/encoder/encoders.13/self_attn/Reshape_2,Reshape,"/encoder/encoders.13/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
540,/encoder/encoders.13/self_attn/Transpose_1,Transpose,"/encoder/encoders.13/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
541,/encoder/encoders.13/self_attn/Transpose_2,Transpose,"/encoder/encoders.13/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
542,/encoder/encoders.13/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.13/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
543,/encoder/encoders.13/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.13/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
544,/encoder/encoders.13/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.13/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.13.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
545,/encoder/encoders.13/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.13/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
546,/encoder/encoders.13/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.13/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
547,/encoder/encoders.13/self_attn/Transpose_3,Transpose,"/encoder/encoders.13/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
548,/encoder/encoders.13/self_attn/Add,Eltwise_Binary,"/encoder/encoders.13/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.13/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
549,/encoder/encoders.13/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.13/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
550,/encoder/encoders.13/self_attn/Transpose_4,Transpose,"/encoder/encoders.13/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
551,/encoder/encoders.13/self_attn/MatMul,MatMul,"/encoder/encoders.13/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.13/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
552,/encoder/encoders.13/self_attn/Softmax,Softmax,"/encoder/encoders.13/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
553,/encoder/encoders.13/self_attn/MatMul_1,MatMul,"/encoder/encoders.13/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.13/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
554,/encoder/encoders.13/self_attn/Transpose_5,Transpose,"/encoder/encoders.13/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
555,/encoder/encoders.13/self_attn/Reshape_3,Reshape,"/encoder/encoders.13/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
556,/encoder/encoders.13/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.13/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.13.self_attn.linear_out.bias
,,,"onnx::MatMul_9339 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.13.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
557,/encoder/encoders.13/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.13/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
558,/encoder/encoders.13/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.13/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.13/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.13/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
559,/encoder/encoders.13/Add,Eltwise_Binary,"/encoder/encoders.12/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.13/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.13/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
560,/encoder/encoders.13/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.13/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.13/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9340 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9341 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
561,/encoder/encoders.13/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.13/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.13/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
562,/encoder/encoders.13/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.13/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.13/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.13.feed_forward.w_1.bias
,,,"onnx::MatMul_9342 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.13.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
563,/encoder/encoders.13/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.13/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.13/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
564,/encoder/encoders.13/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.13/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.13/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
565,/encoder/encoders.13/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.13/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.13/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
566,/encoder/encoders.13/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.13/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.13/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.13.feed_forward.w_2.bias
,,,"onnx::MatMul_9343 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.13.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
567,/encoder/encoders.13/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.13/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.13/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
568,/encoder/encoders.13/Add_1,Eltwise_Binary,"/encoder/encoders.13/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.13/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.13/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
569,/encoder/encoders.14/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.13/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.14/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9344 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9345 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
570,/encoder/encoders.14/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.14/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
571,/encoder/encoders.14/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.14/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.14.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9346 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.14.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
572,/encoder/encoders.14/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.14/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
573,/encoder/encoders.14/self_attn/Split,Split,"/encoder/encoders.14/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.14/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.14/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
574,/encoder/encoders.14/self_attn/Reshape,Reshape,"/encoder/encoders.14/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
575,/encoder/encoders.14/self_attn/Transpose,Transpose,"/encoder/encoders.14/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
576,/encoder/encoders.14/self_attn/Reshape_1,Reshape,"/encoder/encoders.14/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
577,/encoder/encoders.14/self_attn/Reshape_2,Reshape,"/encoder/encoders.14/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
578,/encoder/encoders.14/self_attn/Transpose_1,Transpose,"/encoder/encoders.14/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
579,/encoder/encoders.14/self_attn/Transpose_2,Transpose,"/encoder/encoders.14/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
580,/encoder/encoders.14/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.14/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
581,/encoder/encoders.14/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.14/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
582,/encoder/encoders.14/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.14/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.14.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
583,/encoder/encoders.14/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.14/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
584,/encoder/encoders.14/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.14/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
585,/encoder/encoders.14/self_attn/Transpose_3,Transpose,"/encoder/encoders.14/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
586,/encoder/encoders.14/self_attn/Add,Eltwise_Binary,"/encoder/encoders.14/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.14/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
587,/encoder/encoders.14/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.14/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
588,/encoder/encoders.14/self_attn/Transpose_4,Transpose,"/encoder/encoders.14/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
589,/encoder/encoders.14/self_attn/MatMul,MatMul,"/encoder/encoders.14/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.14/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
590,/encoder/encoders.14/self_attn/Softmax,Softmax,"/encoder/encoders.14/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
591,/encoder/encoders.14/self_attn/MatMul_1,MatMul,"/encoder/encoders.14/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.14/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
592,/encoder/encoders.14/self_attn/Transpose_5,Transpose,"/encoder/encoders.14/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
593,/encoder/encoders.14/self_attn/Reshape_3,Reshape,"/encoder/encoders.14/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
594,/encoder/encoders.14/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.14/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.14.self_attn.linear_out.bias
,,,"onnx::MatMul_9360 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.14.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
595,/encoder/encoders.14/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.14/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
596,/encoder/encoders.14/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.14/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.14/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.14/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
597,/encoder/encoders.14/Add,Eltwise_Binary,"/encoder/encoders.13/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.14/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.14/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
598,/encoder/encoders.14/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.14/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.14/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9361 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9362 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
599,/encoder/encoders.14/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.14/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.14/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
600,/encoder/encoders.14/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.14/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.14/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.14.feed_forward.w_1.bias
,,,"onnx::MatMul_9363 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.14.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
601,/encoder/encoders.14/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.14/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.14/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
602,/encoder/encoders.14/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.14/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.14/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
603,/encoder/encoders.14/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.14/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.14/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
604,/encoder/encoders.14/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.14/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.14/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.14.feed_forward.w_2.bias
,,,"onnx::MatMul_9364 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.14.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
605,/encoder/encoders.14/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.14/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.14/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
606,/encoder/encoders.14/Add_1,Eltwise_Binary,"/encoder/encoders.14/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.14/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.14/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
607,/encoder/encoders.15/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.14/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.15/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9365 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9366 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
608,/encoder/encoders.15/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.15/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
609,/encoder/encoders.15/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.15/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.15.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9367 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.15.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
610,/encoder/encoders.15/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.15/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
611,/encoder/encoders.15/self_attn/Split,Split,"/encoder/encoders.15/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.15/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.15/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
612,/encoder/encoders.15/self_attn/Reshape,Reshape,"/encoder/encoders.15/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
613,/encoder/encoders.15/self_attn/Transpose,Transpose,"/encoder/encoders.15/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
614,/encoder/encoders.15/self_attn/Reshape_1,Reshape,"/encoder/encoders.15/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
615,/encoder/encoders.15/self_attn/Reshape_2,Reshape,"/encoder/encoders.15/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
616,/encoder/encoders.15/self_attn/Transpose_1,Transpose,"/encoder/encoders.15/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
617,/encoder/encoders.15/self_attn/Transpose_2,Transpose,"/encoder/encoders.15/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
618,/encoder/encoders.15/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.15/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
619,/encoder/encoders.15/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.15/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
620,/encoder/encoders.15/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.15/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.15.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
621,/encoder/encoders.15/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.15/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
622,/encoder/encoders.15/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.15/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
623,/encoder/encoders.15/self_attn/Transpose_3,Transpose,"/encoder/encoders.15/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
624,/encoder/encoders.15/self_attn/Add,Eltwise_Binary,"/encoder/encoders.15/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.15/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
625,/encoder/encoders.15/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.15/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
626,/encoder/encoders.15/self_attn/Transpose_4,Transpose,"/encoder/encoders.15/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
627,/encoder/encoders.15/self_attn/MatMul,MatMul,"/encoder/encoders.15/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.15/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
628,/encoder/encoders.15/self_attn/Softmax,Softmax,"/encoder/encoders.15/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
629,/encoder/encoders.15/self_attn/MatMul_1,MatMul,"/encoder/encoders.15/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.15/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
630,/encoder/encoders.15/self_attn/Transpose_5,Transpose,"/encoder/encoders.15/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
631,/encoder/encoders.15/self_attn/Reshape_3,Reshape,"/encoder/encoders.15/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
632,/encoder/encoders.15/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.15/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.15.self_attn.linear_out.bias
,,,"onnx::MatMul_9381 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.15.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
633,/encoder/encoders.15/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.15/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
634,/encoder/encoders.15/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.15/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.15/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.15/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
635,/encoder/encoders.15/Add,Eltwise_Binary,"/encoder/encoders.14/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.15/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.15/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
636,/encoder/encoders.15/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.15/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.15/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9382 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9383 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
637,/encoder/encoders.15/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.15/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.15/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
638,/encoder/encoders.15/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.15/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.15/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.15.feed_forward.w_1.bias
,,,"onnx::MatMul_9384 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.15.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
639,/encoder/encoders.15/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.15/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.15/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
640,/encoder/encoders.15/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.15/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.15/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
641,/encoder/encoders.15/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.15/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.15/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
642,/encoder/encoders.15/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.15/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.15/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.15.feed_forward.w_2.bias
,,,"onnx::MatMul_9385 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.15.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
643,/encoder/encoders.15/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.15/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.15/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
644,/encoder/encoders.15/Add_1,Eltwise_Binary,"/encoder/encoders.15/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.15/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.15/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
645,/encoder/encoders.16/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.15/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.16/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9386 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9387 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
646,/encoder/encoders.16/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.16/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
647,/encoder/encoders.16/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.16/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.16.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9388 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.16.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
648,/encoder/encoders.16/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.16/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
649,/encoder/encoders.16/self_attn/Split,Split,"/encoder/encoders.16/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.16/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.16/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
650,/encoder/encoders.16/self_attn/Reshape,Reshape,"/encoder/encoders.16/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
651,/encoder/encoders.16/self_attn/Transpose,Transpose,"/encoder/encoders.16/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
652,/encoder/encoders.16/self_attn/Reshape_1,Reshape,"/encoder/encoders.16/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
653,/encoder/encoders.16/self_attn/Reshape_2,Reshape,"/encoder/encoders.16/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
654,/encoder/encoders.16/self_attn/Transpose_1,Transpose,"/encoder/encoders.16/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
655,/encoder/encoders.16/self_attn/Transpose_2,Transpose,"/encoder/encoders.16/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
656,/encoder/encoders.16/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.16/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
657,/encoder/encoders.16/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.16/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
658,/encoder/encoders.16/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.16/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.16.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
659,/encoder/encoders.16/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.16/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
660,/encoder/encoders.16/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.16/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
661,/encoder/encoders.16/self_attn/Transpose_3,Transpose,"/encoder/encoders.16/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
662,/encoder/encoders.16/self_attn/Add,Eltwise_Binary,"/encoder/encoders.16/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.16/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
663,/encoder/encoders.16/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.16/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
664,/encoder/encoders.16/self_attn/Transpose_4,Transpose,"/encoder/encoders.16/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
665,/encoder/encoders.16/self_attn/MatMul,MatMul,"/encoder/encoders.16/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.16/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
666,/encoder/encoders.16/self_attn/Softmax,Softmax,"/encoder/encoders.16/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
667,/encoder/encoders.16/self_attn/MatMul_1,MatMul,"/encoder/encoders.16/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.16/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
668,/encoder/encoders.16/self_attn/Transpose_5,Transpose,"/encoder/encoders.16/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
669,/encoder/encoders.16/self_attn/Reshape_3,Reshape,"/encoder/encoders.16/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
670,/encoder/encoders.16/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.16/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.16.self_attn.linear_out.bias
,,,"onnx::MatMul_9402 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.16.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
671,/encoder/encoders.16/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.16/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
672,/encoder/encoders.16/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.16/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.16/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.16/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
673,/encoder/encoders.16/Add,Eltwise_Binary,"/encoder/encoders.15/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.16/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.16/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
674,/encoder/encoders.16/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.16/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.16/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9403 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9404 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
675,/encoder/encoders.16/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.16/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.16/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
676,/encoder/encoders.16/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.16/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.16/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.16.feed_forward.w_1.bias
,,,"onnx::MatMul_9405 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.16.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
677,/encoder/encoders.16/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.16/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.16/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
678,/encoder/encoders.16/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.16/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.16/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
679,/encoder/encoders.16/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.16/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.16/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
680,/encoder/encoders.16/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.16/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.16/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.16.feed_forward.w_2.bias
,,,"onnx::MatMul_9406 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.16.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
681,/encoder/encoders.16/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.16/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.16/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
682,/encoder/encoders.16/Add_1,Eltwise_Binary,"/encoder/encoders.16/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.16/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.16/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
683,/encoder/encoders.17/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.16/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.17/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9407 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9408 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
684,/encoder/encoders.17/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.17/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
685,/encoder/encoders.17/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.17/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.17.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9409 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.17.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
686,/encoder/encoders.17/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.17/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
687,/encoder/encoders.17/self_attn/Split,Split,"/encoder/encoders.17/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.17/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.17/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
688,/encoder/encoders.17/self_attn/Reshape,Reshape,"/encoder/encoders.17/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
689,/encoder/encoders.17/self_attn/Transpose,Transpose,"/encoder/encoders.17/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
690,/encoder/encoders.17/self_attn/Reshape_1,Reshape,"/encoder/encoders.17/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
691,/encoder/encoders.17/self_attn/Reshape_2,Reshape,"/encoder/encoders.17/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
692,/encoder/encoders.17/self_attn/Transpose_1,Transpose,"/encoder/encoders.17/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
693,/encoder/encoders.17/self_attn/Transpose_2,Transpose,"/encoder/encoders.17/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
694,/encoder/encoders.17/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.17/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
695,/encoder/encoders.17/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.17/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
696,/encoder/encoders.17/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.17/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.17.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
697,/encoder/encoders.17/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.17/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
698,/encoder/encoders.17/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.17/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
699,/encoder/encoders.17/self_attn/Transpose_3,Transpose,"/encoder/encoders.17/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
700,/encoder/encoders.17/self_attn/Add,Eltwise_Binary,"/encoder/encoders.17/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.17/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
701,/encoder/encoders.17/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.17/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
702,/encoder/encoders.17/self_attn/Transpose_4,Transpose,"/encoder/encoders.17/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
703,/encoder/encoders.17/self_attn/MatMul,MatMul,"/encoder/encoders.17/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.17/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
704,/encoder/encoders.17/self_attn/Softmax,Softmax,"/encoder/encoders.17/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
705,/encoder/encoders.17/self_attn/MatMul_1,MatMul,"/encoder/encoders.17/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.17/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
706,/encoder/encoders.17/self_attn/Transpose_5,Transpose,"/encoder/encoders.17/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
707,/encoder/encoders.17/self_attn/Reshape_3,Reshape,"/encoder/encoders.17/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
708,/encoder/encoders.17/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.17/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.17.self_attn.linear_out.bias
,,,"onnx::MatMul_9423 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.17.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
709,/encoder/encoders.17/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.17/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
710,/encoder/encoders.17/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.17/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.17/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.17/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
711,/encoder/encoders.17/Add,Eltwise_Binary,"/encoder/encoders.16/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.17/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.17/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
712,/encoder/encoders.17/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.17/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.17/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9424 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9425 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
713,/encoder/encoders.17/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.17/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.17/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
714,/encoder/encoders.17/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.17/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.17/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.17.feed_forward.w_1.bias
,,,"onnx::MatMul_9426 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.17.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
715,/encoder/encoders.17/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.17/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.17/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
716,/encoder/encoders.17/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.17/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.17/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
717,/encoder/encoders.17/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.17/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.17/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
718,/encoder/encoders.17/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.17/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.17/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.17.feed_forward.w_2.bias
,,,"onnx::MatMul_9427 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.17.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
719,/encoder/encoders.17/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.17/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.17/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
720,/encoder/encoders.17/Add_1,Eltwise_Binary,"/encoder/encoders.17/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.17/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.17/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
721,/encoder/encoders.18/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.17/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.18/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9428 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9429 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
722,/encoder/encoders.18/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.18/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
723,/encoder/encoders.18/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.18/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.18.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9430 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.18.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
724,/encoder/encoders.18/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.18/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
725,/encoder/encoders.18/self_attn/Split,Split,"/encoder/encoders.18/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.18/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.18/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
726,/encoder/encoders.18/self_attn/Reshape,Reshape,"/encoder/encoders.18/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
727,/encoder/encoders.18/self_attn/Transpose,Transpose,"/encoder/encoders.18/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
728,/encoder/encoders.18/self_attn/Reshape_1,Reshape,"/encoder/encoders.18/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
729,/encoder/encoders.18/self_attn/Reshape_2,Reshape,"/encoder/encoders.18/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
730,/encoder/encoders.18/self_attn/Transpose_1,Transpose,"/encoder/encoders.18/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
731,/encoder/encoders.18/self_attn/Transpose_2,Transpose,"/encoder/encoders.18/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
732,/encoder/encoders.18/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.18/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
733,/encoder/encoders.18/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.18/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
734,/encoder/encoders.18/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.18/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.18.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
735,/encoder/encoders.18/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.18/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
736,/encoder/encoders.18/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.18/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
737,/encoder/encoders.18/self_attn/Transpose_3,Transpose,"/encoder/encoders.18/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
738,/encoder/encoders.18/self_attn/Add,Eltwise_Binary,"/encoder/encoders.18/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.18/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
739,/encoder/encoders.18/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.18/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
740,/encoder/encoders.18/self_attn/Transpose_4,Transpose,"/encoder/encoders.18/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
741,/encoder/encoders.18/self_attn/MatMul,MatMul,"/encoder/encoders.18/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.18/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
742,/encoder/encoders.18/self_attn/Softmax,Softmax,"/encoder/encoders.18/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
743,/encoder/encoders.18/self_attn/MatMul_1,MatMul,"/encoder/encoders.18/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.18/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
744,/encoder/encoders.18/self_attn/Transpose_5,Transpose,"/encoder/encoders.18/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
745,/encoder/encoders.18/self_attn/Reshape_3,Reshape,"/encoder/encoders.18/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
746,/encoder/encoders.18/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.18/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.18.self_attn.linear_out.bias
,,,"onnx::MatMul_9444 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.18.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
747,/encoder/encoders.18/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.18/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
748,/encoder/encoders.18/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.18/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.18/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.18/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
749,/encoder/encoders.18/Add,Eltwise_Binary,"/encoder/encoders.17/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.18/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.18/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
750,/encoder/encoders.18/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.18/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.18/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9445 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9446 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
751,/encoder/encoders.18/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.18/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.18/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
752,/encoder/encoders.18/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.18/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.18/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.18.feed_forward.w_1.bias
,,,"onnx::MatMul_9447 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.18.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
753,/encoder/encoders.18/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.18/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.18/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
754,/encoder/encoders.18/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.18/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.18/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
755,/encoder/encoders.18/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.18/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.18/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
756,/encoder/encoders.18/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.18/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.18/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.18.feed_forward.w_2.bias
,,,"onnx::MatMul_9448 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.18.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
757,/encoder/encoders.18/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.18/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.18/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
758,/encoder/encoders.18/Add_1,Eltwise_Binary,"/encoder/encoders.18/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.18/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.18/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
759,/encoder/encoders.19/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.18/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.19/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9449 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9450 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
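The LayerNorm rows above (e.g. row 759) normalize a [1,97,512] tensor over axes [2] with epsilon 1e-05, using two static [512] tensors as scale and shift. A minimal numpy sketch of that op (the `gamma`/`beta` names are my own; the dump only lists the two anonymous `onnx::LayerNormalization_*` static tensors):

```python
import numpy as np

def layer_norm(x, gamma, beta, axis=-1, eps=1e-5):
    # Normalize over the last axis (axes: [2] for a [1, 97, 512] tensor),
    # then apply the per-channel scale and shift, matching the LayerNorm rows.
    mean = x.mean(axis=axis, keepdims=True)
    var = x.var(axis=axis, keepdims=True)
    return (x - mean) / np.sqrt(var + eps) * gamma + beta

x = np.random.randn(1, 97, 512).astype(np.float32)
y = layer_norm(x, np.ones(512, np.float32), np.zeros(512, np.float32))
```

With unit scale and zero shift, every [512] slice of the output has mean ~0 and variance ~1, which is the property the epsilon guards numerically.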
760,/encoder/encoders.19/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.19/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
761,/encoder/encoders.19/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.19/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.19.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9451 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.19.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
762,/encoder/encoders.19/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.19/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
763,/encoder/encoders.19/self_attn/Split,Split,"/encoder/encoders.19/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.19/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.19/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
764,/encoder/encoders.19/self_attn/Reshape,Reshape,"/encoder/encoders.19/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
765,/encoder/encoders.19/self_attn/Transpose,Transpose,"/encoder/encoders.19/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
766,/encoder/encoders.19/self_attn/Reshape_1,Reshape,"/encoder/encoders.19/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
767,/encoder/encoders.19/self_attn/Reshape_2,Reshape,"/encoder/encoders.19/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
768,/encoder/encoders.19/self_attn/Transpose_1,Transpose,"/encoder/encoders.19/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
769,/encoder/encoders.19/self_attn/Transpose_2,Transpose,"/encoder/encoders.19/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
770,/encoder/encoders.19/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.19/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
771,/encoder/encoders.19/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.19/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
772,/encoder/encoders.19/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.19/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.19.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
773,/encoder/encoders.19/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.19/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
774,/encoder/encoders.19/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.19/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
775,/encoder/encoders.19/self_attn/Transpose_3,Transpose,"/encoder/encoders.19/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
776,/encoder/encoders.19/self_attn/Add,Eltwise_Binary,"/encoder/encoders.19/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.19/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
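Rows 769-776 form the FSMN memory branch: the value tensor is transposed to [1,512,97], run through a depthwise conv (kernel [1,11,1,512], pad [5,5], stride 1), transposed back, and added onto the original value tensor. A sketch of the same computation in numpy, assuming a simplified [K, C] weight layout in place of the DLC's [1,11,1,512] tensor:

```python
import numpy as np

def fsmn_block(v, weight):
    # v: [1, T, C]; weight: [K, C] per-channel FIR taps (K = 11 here).
    # Depthwise conv along time with (K-1)/2 zero-padding on each side,
    # followed by the residual add back onto v (rows 769-776).
    _, T, C = v.shape
    K = weight.shape[0]
    pad = (K - 1) // 2
    vp = np.pad(v, ((0, 0), (pad, pad), (0, 0)))
    out = np.zeros_like(v)
    for k in range(K):
        out += vp[:, k:k + T, :] * weight[k]
    return v + out

v = np.random.randn(1, 97, 512).astype(np.float32)
w = np.zeros((11, 512), np.float32)
w[5] = 1.0  # identity tap, chosen only to make the behavior checkable
y = fsmn_block(v, w)
```

With the center tap set to 1 and all others 0, the conv is an identity, so the residual add yields exactly 2*v; the real per-channel taps are learned weights (`encoder.encoders.19.self_attn.fsmn_block.weight` in the dump).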
777,/encoder/encoders.19/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.19/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
778,/encoder/encoders.19/self_attn/Transpose_4,Transpose,"/encoder/encoders.19/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
779,/encoder/encoders.19/self_attn/MatMul,MatMul,"/encoder/encoders.19/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.19/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
780,/encoder/encoders.19/self_attn/Softmax,Softmax,"/encoder/encoders.19/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
781,/encoder/encoders.19/self_attn/MatMul_1,MatMul,"/encoder/encoders.19/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.19/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
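Rows 777-781 are the attention core: q is scaled by a 1-element static constant (operation 13 = elementwise multiply), matmul'd against k transposed to [1,4,128,97], softmaxed over axis 3, then matmul'd with v. A numpy sketch; the scale value 1/sqrt(128) is an assumption (the dump only shows an opaque static scalar, though 1/sqrt(head_dim) is the conventional choice for 128-dim heads):

```python
import numpy as np

def attention(q, k, v, scale):
    # q, k, v: [1, heads, T, d]. Scale q, take q @ k^T, softmax over the
    # last axis, then weight v -- mirroring rows 777-781.
    scores = (q * scale) @ k.transpose(0, 1, 3, 2)   # [1, 4, 97, 97]
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=-1, keepdims=True)
    return probs @ v                                 # [1, 4, 97, 128]

q = np.random.randn(1, 4, 97, 128).astype(np.float32)
k = np.random.randn(1, 4, 97, 128).astype(np.float32)
v = np.random.randn(1, 4, 97, 128).astype(np.float32)
out = attention(q, k, v, 1.0 / np.sqrt(128.0))
```

The [1,4,97,97] score shape matches the MatMul row, and the softmax's `axis: 3` / `beta: 1` parameters correspond to the last-axis normalization above.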
782,/encoder/encoders.19/self_attn/Transpose_5,Transpose,"/encoder/encoders.19/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
783,/encoder/encoders.19/self_attn/Reshape_3,Reshape,"/encoder/encoders.19/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
784,/encoder/encoders.19/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.19/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.19.self_attn.linear_out.bias
,,,"onnx::MatMul_9465 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.19.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
785,/encoder/encoders.19/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.19/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
786,/encoder/encoders.19/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.19/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.19/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.19/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
787,/encoder/encoders.19/Add,Eltwise_Binary,"/encoder/encoders.18/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.19/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.19/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
788,/encoder/encoders.19/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.19/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.19/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9466 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9467 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
789,/encoder/encoders.19/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.19/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.19/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
790,/encoder/encoders.19/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.19/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.19/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.19.feed_forward.w_1.bias
,,,"onnx::MatMul_9468 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.19.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
791,/encoder/encoders.19/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.19/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.19/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
792,/encoder/encoders.19/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.19/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.19/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
793,/encoder/encoders.19/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.19/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.19/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
794,/encoder/encoders.19/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.19/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.19/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.19.feed_forward.w_2.bias
,,,"onnx::MatMul_9469 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.19.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
795,/encoder/encoders.19/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.19/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.19/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
796,/encoder/encoders.19/Add_1,Eltwise_Binary,"/encoder/encoders.19/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.19/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.19/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
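Rows 759-796 together trace one complete pre-norm encoder block: x = x + SelfAttn(LayerNorm1(x)), then x = x + FFN(LayerNorm2(x)), where the FFN is Linear 512->2048, ReLU, Linear 2048->512 (rows 789-795). A structural sketch of that wiring, with self-attention stubbed as identity and the LayerNorm scale/shift omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_norm(x, eps=1e-5):
    m = x.mean(-1, keepdims=True)
    v = x.var(-1, keepdims=True)
    return (x - m) / np.sqrt(v + eps)

def feed_forward(x, w1, b1, w2, b2):
    # Rows 789-795: Linear 512->2048, ReLU, Linear 2048->512.
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

def encoder_block(x, self_attn, ffn_params):
    # Pre-norm residual wiring seen across rows 759-796:
    #   x = x + SelfAttn(LayerNorm(x)); x = x + FFN(LayerNorm(x))
    x = x + self_attn(layer_norm(x))
    return x + feed_forward(layer_norm(x), *ffn_params)

x = rng.standard_normal((1, 97, 512)).astype(np.float32)
params = (rng.standard_normal((512, 2048)).astype(np.float32) * 0.02,
          np.zeros(2048, np.float32),
          rng.standard_normal((2048, 512)).astype(np.float32) * 0.02,
          np.zeros(512, np.float32))
y = encoder_block(x, self_attn=lambda t: t, ffn_params=params)
```

The pre/post `Reshape` rows wrapping each `FullyConnected` in the dump (e.g. rows 751/753) exist only to flatten [1,97,512] to [97,512] for the matmul and back; the 2D `@` above subsumes them via broadcasting.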
797,/encoder/encoders.20/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.19/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.20/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9470 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9471 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
798,/encoder/encoders.20/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.20/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
799,/encoder/encoders.20/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.20/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.20.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9472 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.20.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
800,/encoder/encoders.20/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.20/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
801,/encoder/encoders.20/self_attn/Split,Split,"/encoder/encoders.20/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.20/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.20/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
802,/encoder/encoders.20/self_attn/Reshape,Reshape,"/encoder/encoders.20/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
803,/encoder/encoders.20/self_attn/Transpose,Transpose,"/encoder/encoders.20/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
804,/encoder/encoders.20/self_attn/Reshape_1,Reshape,"/encoder/encoders.20/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
805,/encoder/encoders.20/self_attn/Reshape_2,Reshape,"/encoder/encoders.20/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
806,/encoder/encoders.20/self_attn/Transpose_1,Transpose,"/encoder/encoders.20/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
807,/encoder/encoders.20/self_attn/Transpose_2,Transpose,"/encoder/encoders.20/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
808,/encoder/encoders.20/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.20/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
809,/encoder/encoders.20/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.20/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
810,/encoder/encoders.20/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.20/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.20.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
811,/encoder/encoders.20/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.20/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
812,/encoder/encoders.20/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.20/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
813,/encoder/encoders.20/self_attn/Transpose_3,Transpose,"/encoder/encoders.20/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
814,/encoder/encoders.20/self_attn/Add,Eltwise_Binary,"/encoder/encoders.20/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.20/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
815,/encoder/encoders.20/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.20/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
816,/encoder/encoders.20/self_attn/Transpose_4,Transpose,"/encoder/encoders.20/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
817,/encoder/encoders.20/self_attn/MatMul,MatMul,"/encoder/encoders.20/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.20/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
818,/encoder/encoders.20/self_attn/Softmax,Softmax,"/encoder/encoders.20/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
819,/encoder/encoders.20/self_attn/MatMul_1,MatMul,"/encoder/encoders.20/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.20/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
820,/encoder/encoders.20/self_attn/Transpose_5,Transpose,"/encoder/encoders.20/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
821,/encoder/encoders.20/self_attn/Reshape_3,Reshape,"/encoder/encoders.20/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
822,/encoder/encoders.20/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.20/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.20.self_attn.linear_out.bias
,,,"onnx::MatMul_9486 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.20.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
823,/encoder/encoders.20/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.20/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
824,/encoder/encoders.20/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.20/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.20/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.20/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
825,/encoder/encoders.20/Add,Eltwise_Binary,"/encoder/encoders.19/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.20/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.20/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
826,/encoder/encoders.20/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.20/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.20/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9487 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9488 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
827,/encoder/encoders.20/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.20/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.20/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
828,/encoder/encoders.20/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.20/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.20/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.20.feed_forward.w_1.bias
,,,"onnx::MatMul_9489 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.20.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
829,/encoder/encoders.20/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.20/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.20/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
830,/encoder/encoders.20/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.20/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.20/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
831,/encoder/encoders.20/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.20/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.20/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
832,/encoder/encoders.20/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.20/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.20/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.20.feed_forward.w_2.bias
,,,"onnx::MatMul_9490 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.20.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
833,/encoder/encoders.20/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.20/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.20/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
834,/encoder/encoders.20/Add_1,Eltwise_Binary,"/encoder/encoders.20/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.20/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.20/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
835,/encoder/encoders.21/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.20/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.21/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9491 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9492 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
836,/encoder/encoders.21/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.21/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
837,/encoder/encoders.21/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.21/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.21.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9493 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.21.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
838,/encoder/encoders.21/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.21/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
839,/encoder/encoders.21/self_attn/Split,Split,"/encoder/encoders.21/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.21/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.21/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
840,/encoder/encoders.21/self_attn/Reshape,Reshape,"/encoder/encoders.21/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
841,/encoder/encoders.21/self_attn/Transpose,Transpose,"/encoder/encoders.21/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
842,/encoder/encoders.21/self_attn/Reshape_1,Reshape,"/encoder/encoders.21/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
843,/encoder/encoders.21/self_attn/Reshape_2,Reshape,"/encoder/encoders.21/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
844,/encoder/encoders.21/self_attn/Transpose_1,Transpose,"/encoder/encoders.21/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
845,/encoder/encoders.21/self_attn/Transpose_2,Transpose,"/encoder/encoders.21/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
846,/encoder/encoders.21/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.21/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
847,/encoder/encoders.21/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.21/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
848,/encoder/encoders.21/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.21/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.21.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
849,/encoder/encoders.21/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.21/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
850,/encoder/encoders.21/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.21/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
851,/encoder/encoders.21/self_attn/Transpose_3,Transpose,"/encoder/encoders.21/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
852,/encoder/encoders.21/self_attn/Add,Eltwise_Binary,"/encoder/encoders.21/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.21/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
853,/encoder/encoders.21/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.21/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
854,/encoder/encoders.21/self_attn/Transpose_4,Transpose,"/encoder/encoders.21/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
855,/encoder/encoders.21/self_attn/MatMul,MatMul,"/encoder/encoders.21/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.21/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
856,/encoder/encoders.21/self_attn/Softmax,Softmax,"/encoder/encoders.21/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
857,/encoder/encoders.21/self_attn/MatMul_1,MatMul,"/encoder/encoders.21/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.21/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
858,/encoder/encoders.21/self_attn/Transpose_5,Transpose,"/encoder/encoders.21/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
859,/encoder/encoders.21/self_attn/Reshape_3,Reshape,"/encoder/encoders.21/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
860,/encoder/encoders.21/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.21/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.21.self_attn.linear_out.bias
,,,"onnx::MatMul_9507 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.21.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
861,/encoder/encoders.21/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.21/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
862,/encoder/encoders.21/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.21/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.21/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.21/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
863,/encoder/encoders.21/Add,Eltwise_Binary,"/encoder/encoders.20/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.21/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.21/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
864,/encoder/encoders.21/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.21/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.21/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9508 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9509 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
865,/encoder/encoders.21/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.21/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.21/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
866,/encoder/encoders.21/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.21/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.21/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.21.feed_forward.w_1.bias
,,,"onnx::MatMul_9510 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.21.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
867,/encoder/encoders.21/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.21/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.21/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
868,/encoder/encoders.21/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.21/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.21/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
869,/encoder/encoders.21/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.21/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.21/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
870,/encoder/encoders.21/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.21/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.21/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.21.feed_forward.w_2.bias
,,,"onnx::MatMul_9511 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.21.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
871,/encoder/encoders.21/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.21/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.21/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
872,/encoder/encoders.21/Add_1,Eltwise_Binary,"/encoder/encoders.21/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.21/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.21/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
873,/encoder/encoders.22/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.21/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.22/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9512 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9513 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
874,/encoder/encoders.22/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.22/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
875,/encoder/encoders.22/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.22/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.22.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9514 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.22.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
876,/encoder/encoders.22/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.22/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
877,/encoder/encoders.22/self_attn/Split,Split,"/encoder/encoders.22/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.22/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.22/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
878,/encoder/encoders.22/self_attn/Reshape,Reshape,"/encoder/encoders.22/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
879,/encoder/encoders.22/self_attn/Transpose,Transpose,"/encoder/encoders.22/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
880,/encoder/encoders.22/self_attn/Reshape_1,Reshape,"/encoder/encoders.22/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
881,/encoder/encoders.22/self_attn/Reshape_2,Reshape,"/encoder/encoders.22/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
882,/encoder/encoders.22/self_attn/Transpose_1,Transpose,"/encoder/encoders.22/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
883,/encoder/encoders.22/self_attn/Transpose_2,Transpose,"/encoder/encoders.22/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
884,/encoder/encoders.22/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.22/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
885,/encoder/encoders.22/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.22/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
886,/encoder/encoders.22/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.22/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.22.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
887,/encoder/encoders.22/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.22/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
888,/encoder/encoders.22/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.22/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
889,/encoder/encoders.22/self_attn/Transpose_3,Transpose,"/encoder/encoders.22/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
890,/encoder/encoders.22/self_attn/Add,Eltwise_Binary,"/encoder/encoders.22/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.22/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
891,/encoder/encoders.22/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.22/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
892,/encoder/encoders.22/self_attn/Transpose_4,Transpose,"/encoder/encoders.22/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
893,/encoder/encoders.22/self_attn/MatMul,MatMul,"/encoder/encoders.22/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.22/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
894,/encoder/encoders.22/self_attn/Softmax,Softmax,"/encoder/encoders.22/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
895,/encoder/encoders.22/self_attn/MatMul_1,MatMul,"/encoder/encoders.22/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.22/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
896,/encoder/encoders.22/self_attn/Transpose_5,Transpose,"/encoder/encoders.22/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
897,/encoder/encoders.22/self_attn/Reshape_3,Reshape,"/encoder/encoders.22/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
898,/encoder/encoders.22/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.22/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.22.self_attn.linear_out.bias
,,,"onnx::MatMul_9528 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.22.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
899,/encoder/encoders.22/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.22/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
900,/encoder/encoders.22/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.22/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.22/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.22/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
901,/encoder/encoders.22/Add,Eltwise_Binary,"/encoder/encoders.21/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.22/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.22/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
902,/encoder/encoders.22/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.22/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.22/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9529 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9530 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
903,/encoder/encoders.22/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.22/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.22/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
904,/encoder/encoders.22/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.22/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.22/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.22.feed_forward.w_1.bias
,,,"onnx::MatMul_9531 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.22.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
905,/encoder/encoders.22/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.22/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.22/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
906,/encoder/encoders.22/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.22/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.22/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
907,/encoder/encoders.22/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.22/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.22/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
908,/encoder/encoders.22/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.22/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.22/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.22.feed_forward.w_2.bias
,,,"onnx::MatMul_9532 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.22.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
909,/encoder/encoders.22/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.22/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.22/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
910,/encoder/encoders.22/Add_1,Eltwise_Binary,"/encoder/encoders.22/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.22/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.22/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
911,/encoder/encoders.23/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.22/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.23/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9533 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9534 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
912,/encoder/encoders.23/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.23/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
913,/encoder/encoders.23/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.23/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.23.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9535 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.23.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
914,/encoder/encoders.23/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.23/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
915,/encoder/encoders.23/self_attn/Split,Split,"/encoder/encoders.23/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.23/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.23/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
916,/encoder/encoders.23/self_attn/Reshape,Reshape,"/encoder/encoders.23/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
917,/encoder/encoders.23/self_attn/Transpose,Transpose,"/encoder/encoders.23/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
918,/encoder/encoders.23/self_attn/Reshape_1,Reshape,"/encoder/encoders.23/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
919,/encoder/encoders.23/self_attn/Reshape_2,Reshape,"/encoder/encoders.23/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
920,/encoder/encoders.23/self_attn/Transpose_1,Transpose,"/encoder/encoders.23/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
921,/encoder/encoders.23/self_attn/Transpose_2,Transpose,"/encoder/encoders.23/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
922,/encoder/encoders.23/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.23/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
923,/encoder/encoders.23/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.23/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
924,/encoder/encoders.23/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.23/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.23.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
925,/encoder/encoders.23/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.23/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
926,/encoder/encoders.23/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.23/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
927,/encoder/encoders.23/self_attn/Transpose_3,Transpose,"/encoder/encoders.23/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
928,/encoder/encoders.23/self_attn/Add,Eltwise_Binary,"/encoder/encoders.23/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.23/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
929,/encoder/encoders.23/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.23/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
930,/encoder/encoders.23/self_attn/Transpose_4,Transpose,"/encoder/encoders.23/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
931,/encoder/encoders.23/self_attn/MatMul,MatMul,"/encoder/encoders.23/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.23/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
932,/encoder/encoders.23/self_attn/Softmax,Softmax,"/encoder/encoders.23/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
933,/encoder/encoders.23/self_attn/MatMul_1,MatMul,"/encoder/encoders.23/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.23/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
934,/encoder/encoders.23/self_attn/Transpose_5,Transpose,"/encoder/encoders.23/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
935,/encoder/encoders.23/self_attn/Reshape_3,Reshape,"/encoder/encoders.23/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
936,/encoder/encoders.23/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.23/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.23.self_attn.linear_out.bias
,,,"onnx::MatMul_9549 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.23.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
937,/encoder/encoders.23/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.23/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
938,/encoder/encoders.23/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.23/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.23/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.23/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
939,/encoder/encoders.23/Add,Eltwise_Binary,"/encoder/encoders.22/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.23/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.23/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
940,/encoder/encoders.23/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.23/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.23/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9550 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9551 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
941,/encoder/encoders.23/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.23/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.23/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
942,/encoder/encoders.23/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.23/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.23/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.23.feed_forward.w_1.bias
,,,"onnx::MatMul_9552 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.23.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
943,/encoder/encoders.23/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.23/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.23/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
944,/encoder/encoders.23/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.23/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.23/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
945,/encoder/encoders.23/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.23/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.23/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
946,/encoder/encoders.23/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.23/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.23/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.23.feed_forward.w_2.bias
,,,"onnx::MatMul_9553 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.23.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
947,/encoder/encoders.23/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.23/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.23/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
948,/encoder/encoders.23/Add_1,Eltwise_Binary,"/encoder/encoders.23/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.23/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.23/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
949,/encoder/encoders.24/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.23/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.24/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9554 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9555 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
950,/encoder/encoders.24/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.24/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
951,/encoder/encoders.24/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.24/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.24.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9556 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.24.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
952,/encoder/encoders.24/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.24/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
953,/encoder/encoders.24/self_attn/Split,Split,"/encoder/encoders.24/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.24/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.24/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
954,/encoder/encoders.24/self_attn/Reshape,Reshape,"/encoder/encoders.24/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
955,/encoder/encoders.24/self_attn/Transpose,Transpose,"/encoder/encoders.24/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
956,/encoder/encoders.24/self_attn/Reshape_1,Reshape,"/encoder/encoders.24/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
957,/encoder/encoders.24/self_attn/Reshape_2,Reshape,"/encoder/encoders.24/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
958,/encoder/encoders.24/self_attn/Transpose_1,Transpose,"/encoder/encoders.24/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
959,/encoder/encoders.24/self_attn/Transpose_2,Transpose,"/encoder/encoders.24/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
960,/encoder/encoders.24/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.24/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
961,/encoder/encoders.24/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.24/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
962,/encoder/encoders.24/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.24/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.24.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
963,/encoder/encoders.24/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.24/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
964,/encoder/encoders.24/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.24/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
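Ops 960–964 above are a layout sandwich: the converter realizes a 1-D FSMN convolution as a 2-D depthwise conv by inserting a unit height axis, transposing to NHWC for the conv backend, then transposing and reshaping back. The numpy sketch below (small stand-in shapes, not the actual [1,512,97] tensor) just checks that the reshape/transpose round trip matches the listed `perm` and `shape` attributes and is lossless when the conv is shape-preserving:

```python
import numpy as np

x = np.arange(1 * 6 * 4, dtype=np.float32).reshape(1, 6, 4)  # (N, C, W), stands in for [1,512,97]

# op 960: Reshape to 2d -- insert a unit height axis: (N, C, 1, W)
x_2d = x.reshape(1, 6, 1, 4)
# op 961: Transpose to NHWC, perm [0, 2, 3, 1]: (N, 1, W, C)
x_nhwc = x_2d.transpose(0, 2, 3, 1)
# ... DepthWiseConv2d with a 1x11 kernel runs here; pad_amount [[0,0],[5,5]]
#     keeps the spatial shape, so the surrounding layout ops cancel exactly ...
# op 963: Transpose back to NCHW, perm [0, 3, 1, 2]: (N, C, 1, W)
x_nchw = x_nhwc.transpose(0, 3, 1, 2)
# op 964: Reshape drops the unit height axis: (N, C, W)
x_back = x_nchw.reshape(1, 6, 4)

assert np.array_equal(x, x_back)  # the layout sandwich is lossless
```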
965,/encoder/encoders.24/self_attn/Transpose_3,Transpose,"/encoder/encoders.24/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
966,/encoder/encoders.24/self_attn/Add,Eltwise_Binary,"/encoder/encoders.24/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.24/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
967,/encoder/encoders.24/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.24/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
968,/encoder/encoders.24/self_attn/Transpose_4,Transpose,"/encoder/encoders.24/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
969,/encoder/encoders.24/self_attn/MatMul,MatMul,"/encoder/encoders.24/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.24/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
970,/encoder/encoders.24/self_attn/Softmax,Softmax,"/encoder/encoders.24/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
971,/encoder/encoders.24/self_attn/MatMul_1,MatMul,"/encoder/encoders.24/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.24/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
972,/encoder/encoders.24/self_attn/Transpose_5,Transpose,"/encoder/encoders.24/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
973,/encoder/encoders.24/self_attn/Reshape_3,Reshape,"/encoder/encoders.24/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
974,/encoder/encoders.24/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.24/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.24.self_attn.linear_out.bias
,,,"onnx::MatMul_9570 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.24.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
975,/encoder/encoders.24/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.24/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
976,/encoder/encoders.24/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.24/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.24/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.24/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
977,/encoder/encoders.24/Add,Eltwise_Binary,"/encoder/encoders.23/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.24/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.24/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
978,/encoder/encoders.24/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.24/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.24/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9571 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9572 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
979,/encoder/encoders.24/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.24/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.24/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
980,/encoder/encoders.24/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.24/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.24/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.24.feed_forward.w_1.bias
,,,"onnx::MatMul_9573 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.24.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
981,/encoder/encoders.24/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.24/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.24/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
982,/encoder/encoders.24/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.24/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.24/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
983,/encoder/encoders.24/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.24/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.24/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
984,/encoder/encoders.24/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.24/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.24/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.24.feed_forward.w_2.bias
,,,"onnx::MatMul_9574 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.24.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
985,/encoder/encoders.24/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.24/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.24/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
986,/encoder/encoders.24/Add_1,Eltwise_Binary,"/encoder/encoders.24/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.24/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.24/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
987,/encoder/encoders.25/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.24/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.25/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9575 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9576 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
988,/encoder/encoders.25/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.25/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
989,/encoder/encoders.25/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.25/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.25.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9577 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.25.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
990,/encoder/encoders.25/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.25/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
991,/encoder/encoders.25/self_attn/Split,Split,"/encoder/encoders.25/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.25/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.25/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
992,/encoder/encoders.25/self_attn/Reshape,Reshape,"/encoder/encoders.25/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
993,/encoder/encoders.25/self_attn/Transpose,Transpose,"/encoder/encoders.25/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
994,/encoder/encoders.25/self_attn/Reshape_1,Reshape,"/encoder/encoders.25/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
995,/encoder/encoders.25/self_attn/Reshape_2,Reshape,"/encoder/encoders.25/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
996,/encoder/encoders.25/self_attn/Transpose_1,Transpose,"/encoder/encoders.25/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
997,/encoder/encoders.25/self_attn/Transpose_2,Transpose,"/encoder/encoders.25/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
998,/encoder/encoders.25/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.25/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
999,/encoder/encoders.25/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.25/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1000,/encoder/encoders.25/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.25/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.25.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1001,/encoder/encoders.25/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.25/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1002,/encoder/encoders.25/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.25/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1003,/encoder/encoders.25/self_attn/Transpose_3,Transpose,"/encoder/encoders.25/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1004,/encoder/encoders.25/self_attn/Add,Eltwise_Binary,"/encoder/encoders.25/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.25/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1005,/encoder/encoders.25/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.25/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1006,/encoder/encoders.25/self_attn/Transpose_4,Transpose,"/encoder/encoders.25/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1007,/encoder/encoders.25/self_attn/MatMul,MatMul,"/encoder/encoders.25/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.25/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1008,/encoder/encoders.25/self_attn/Softmax,Softmax,"/encoder/encoders.25/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1009,/encoder/encoders.25/self_attn/MatMul_1,MatMul,"/encoder/encoders.25/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.25/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1010,/encoder/encoders.25/self_attn/Transpose_5,Transpose,"/encoder/encoders.25/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1011,/encoder/encoders.25/self_attn/Reshape_3,Reshape,"/encoder/encoders.25/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1012,/encoder/encoders.25/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.25/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.25.self_attn.linear_out.bias
,,,"onnx::MatMul_9591 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.25.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1013,/encoder/encoders.25/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.25/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1014,/encoder/encoders.25/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.25/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.25/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.25/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1015,/encoder/encoders.25/Add,Eltwise_Binary,"/encoder/encoders.24/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.25/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.25/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1016,/encoder/encoders.25/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.25/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.25/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9592 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9593 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1017,/encoder/encoders.25/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.25/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.25/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1018,/encoder/encoders.25/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.25/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.25/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.25.feed_forward.w_1.bias
,,,"onnx::MatMul_9594 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.25.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1019,/encoder/encoders.25/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.25/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.25/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1020,/encoder/encoders.25/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.25/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.25/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1021,/encoder/encoders.25/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.25/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.25/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1022,/encoder/encoders.25/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.25/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.25/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.25.feed_forward.w_2.bias
,,,"onnx::MatMul_9595 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.25.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1023,/encoder/encoders.25/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.25/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.25/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1024,/encoder/encoders.25/Add_1,Eltwise_Binary,"/encoder/encoders.25/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.25/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.25/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1025,/encoder/encoders.26/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.25/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.26/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9596 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9597 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1026,/encoder/encoders.26/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.26/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1027,/encoder/encoders.26/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.26/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.26.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9598 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.26.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1028,/encoder/encoders.26/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.26/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1029,/encoder/encoders.26/self_attn/Split,Split,"/encoder/encoders.26/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.26/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.26/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1030,/encoder/encoders.26/self_attn/Reshape,Reshape,"/encoder/encoders.26/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1031,/encoder/encoders.26/self_attn/Transpose,Transpose,"/encoder/encoders.26/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1032,/encoder/encoders.26/self_attn/Reshape_1,Reshape,"/encoder/encoders.26/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1033,/encoder/encoders.26/self_attn/Reshape_2,Reshape,"/encoder/encoders.26/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1034,/encoder/encoders.26/self_attn/Transpose_1,Transpose,"/encoder/encoders.26/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1035,/encoder/encoders.26/self_attn/Transpose_2,Transpose,"/encoder/encoders.26/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1036,/encoder/encoders.26/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.26/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1037,/encoder/encoders.26/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.26/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1038,/encoder/encoders.26/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.26/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.26.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1039,/encoder/encoders.26/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.26/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1040,/encoder/encoders.26/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.26/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1041,/encoder/encoders.26/self_attn/Transpose_3,Transpose,"/encoder/encoders.26/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1042,/encoder/encoders.26/self_attn/Add,Eltwise_Binary,"/encoder/encoders.26/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.26/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1043,/encoder/encoders.26/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.26/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1044,/encoder/encoders.26/self_attn/Transpose_4,Transpose,"/encoder/encoders.26/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1045,/encoder/encoders.26/self_attn/MatMul,MatMul,"/encoder/encoders.26/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.26/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1046,/encoder/encoders.26/self_attn/Softmax,Softmax,"/encoder/encoders.26/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1047,/encoder/encoders.26/self_attn/MatMul_1,MatMul,"/encoder/encoders.26/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.26/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1048,/encoder/encoders.26/self_attn/Transpose_5,Transpose,"/encoder/encoders.26/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1049,/encoder/encoders.26/self_attn/Reshape_3,Reshape,"/encoder/encoders.26/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1050,/encoder/encoders.26/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.26/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.26.self_attn.linear_out.bias
,,,"onnx::MatMul_9612 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.26.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1051,/encoder/encoders.26/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.26/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1052,/encoder/encoders.26/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.26/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.26/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.26/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1053,/encoder/encoders.26/Add,Eltwise_Binary,"/encoder/encoders.25/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.26/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.26/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1054,/encoder/encoders.26/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.26/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.26/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9613 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9614 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1055,/encoder/encoders.26/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.26/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.26/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1056,/encoder/encoders.26/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.26/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.26/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.26.feed_forward.w_1.bias
,,,"onnx::MatMul_9615 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.26.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1057,/encoder/encoders.26/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.26/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.26/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1058,/encoder/encoders.26/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.26/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.26/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1059,/encoder/encoders.26/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.26/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.26/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1060,/encoder/encoders.26/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.26/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.26/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.26.feed_forward.w_2.bias
,,,"onnx::MatMul_9616 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.26.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1061,/encoder/encoders.26/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.26/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.26/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1062,/encoder/encoders.26/Add_1,Eltwise_Binary,"/encoder/encoders.26/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.26/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.26/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1063,/encoder/encoders.27/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.26/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.27/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9617 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9618 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1064,/encoder/encoders.27/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.27/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1065,/encoder/encoders.27/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.27/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.27.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9619 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.27.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1066,/encoder/encoders.27/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.27/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1067,/encoder/encoders.27/self_attn/Split,Split,"/encoder/encoders.27/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.27/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.27/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1068,/encoder/encoders.27/self_attn/Reshape,Reshape,"/encoder/encoders.27/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1069,/encoder/encoders.27/self_attn/Transpose,Transpose,"/encoder/encoders.27/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1070,/encoder/encoders.27/self_attn/Reshape_1,Reshape,"/encoder/encoders.27/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1071,/encoder/encoders.27/self_attn/Reshape_2,Reshape,"/encoder/encoders.27/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1072,/encoder/encoders.27/self_attn/Transpose_1,Transpose,"/encoder/encoders.27/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1073,/encoder/encoders.27/self_attn/Transpose_2,Transpose,"/encoder/encoders.27/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1074,/encoder/encoders.27/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.27/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1075,/encoder/encoders.27/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.27/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1076,/encoder/encoders.27/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.27/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.27.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1077,/encoder/encoders.27/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.27/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1078,/encoder/encoders.27/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.27/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1079,/encoder/encoders.27/self_attn/Transpose_3,Transpose,"/encoder/encoders.27/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1080,/encoder/encoders.27/self_attn/Add,Eltwise_Binary,"/encoder/encoders.27/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.27/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1081,/encoder/encoders.27/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.27/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1082,/encoder/encoders.27/self_attn/Transpose_4,Transpose,"/encoder/encoders.27/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1083,/encoder/encoders.27/self_attn/MatMul,MatMul,"/encoder/encoders.27/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.27/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1084,/encoder/encoders.27/self_attn/Softmax,Softmax,"/encoder/encoders.27/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1085,/encoder/encoders.27/self_attn/MatMul_1,MatMul,"/encoder/encoders.27/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.27/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1086,/encoder/encoders.27/self_attn/Transpose_5,Transpose,"/encoder/encoders.27/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1087,/encoder/encoders.27/self_attn/Reshape_3,Reshape,"/encoder/encoders.27/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1088,/encoder/encoders.27/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.27/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.27.self_attn.linear_out.bias
,,,"onnx::MatMul_9633 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.27.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1089,/encoder/encoders.27/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.27/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1090,/encoder/encoders.27/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.27/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.27/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.27/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1091,/encoder/encoders.27/Add,Eltwise_Binary,"/encoder/encoders.26/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.27/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.27/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1092,/encoder/encoders.27/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.27/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.27/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9634 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9635 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1093,/encoder/encoders.27/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.27/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.27/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1094,/encoder/encoders.27/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.27/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.27/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.27.feed_forward.w_1.bias
,,,"onnx::MatMul_9636 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.27.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1095,/encoder/encoders.27/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.27/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.27/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1096,/encoder/encoders.27/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.27/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.27/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1097,/encoder/encoders.27/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.27/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.27/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1098,/encoder/encoders.27/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.27/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.27/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.27.feed_forward.w_2.bias
,,,"onnx::MatMul_9637 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.27.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1099,/encoder/encoders.27/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.27/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.27/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1100,/encoder/encoders.27/Add_1,Eltwise_Binary,"/encoder/encoders.27/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.27/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.27/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
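The feed-forward rows above (w_1/MatMul through Add_1) flatten [1,97,512] to [97,512], run two FullyConnected ops with a ReLU between them (ElementWiseNeuron operation 4), reshape back, and add the residual. A minimal NumPy sketch of that arithmetic, assuming the dump's transposed weight layout ([2048,512] and [512,2048]); the function and variable names are illustrative, not the converter's:

```python
import numpy as np

def feed_forward(x, w1, b1, w2, b2):
    """Position-wise FFN matching the w_1/Relu/w_2 rows (512 -> 2048 -> 512).

    The pre/post Reshape rows flatten [1,97,512] to [97,512] so the
    projections run as plain FullyConnected ops; the static weight
    tensors in the dump are stored transposed ([2048,512], [512,2048]).
    """
    _, T, C = x.shape
    h = x.reshape(T, C) @ w1.T + b1   # MatMul_pre_reshape + FullyConnected
    h = np.maximum(h, 0.0)            # ElementWiseNeuron operation 4 = ReLU
    y = h @ w2.T + b2                 # second FullyConnected
    return y.reshape(1, T, -1)        # MatMul_post_reshape back to [1,97,512]

T, C, F = 97, 512, 2048
x = np.random.randn(1, T, C).astype(np.float32)
w1 = np.random.randn(F, C).astype(np.float32); b1 = np.zeros(F, np.float32)
w2 = np.random.randn(C, F).astype(np.float32); b2 = np.zeros(C, np.float32)
y = feed_forward(x, w1, b1, w2, b2)
print(y.shape)  # (1, 97, 512)
```

The residual Add_1 row then adds this output back onto the layer's attention output outside the sketch.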
1101,/encoder/encoders.28/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.27/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.28/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9638 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9639 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1102,/encoder/encoders.28/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.28/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1103,/encoder/encoders.28/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.28/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.28.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9640 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.28.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1104,/encoder/encoders.28/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.28/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1105,/encoder/encoders.28/self_attn/Split,Split,"/encoder/encoders.28/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.28/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.28/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1106,/encoder/encoders.28/self_attn/Reshape,Reshape,"/encoder/encoders.28/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1107,/encoder/encoders.28/self_attn/Transpose,Transpose,"/encoder/encoders.28/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1108,/encoder/encoders.28/self_attn/Reshape_1,Reshape,"/encoder/encoders.28/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1109,/encoder/encoders.28/self_attn/Reshape_2,Reshape,"/encoder/encoders.28/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1110,/encoder/encoders.28/self_attn/Transpose_1,Transpose,"/encoder/encoders.28/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1111,/encoder/encoders.28/self_attn/Transpose_2,Transpose,"/encoder/encoders.28/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1112,/encoder/encoders.28/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.28/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1113,/encoder/encoders.28/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.28/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1114,/encoder/encoders.28/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.28/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.28.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1115,/encoder/encoders.28/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.28/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1116,/encoder/encoders.28/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.28/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1117,/encoder/encoders.28/self_attn/Transpose_3,Transpose,"/encoder/encoders.28/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1118,/encoder/encoders.28/self_attn/Add,Eltwise_Binary,"/encoder/encoders.28/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.28/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
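The Conv_reshape_to_2d through Add rows above implement the FSMN memory branch: a per-channel 1-D convolution over the time axis, expressed in the graph as a DepthWiseConv2d with an 11-tap kernel and symmetric padding of 5 (pad_amount [[0,0],[5,5]]), followed by a residual add with the value split. A hedged NumPy sketch under those assumptions, with the [1,11,1,512] weight's singleton axes squeezed out; the names are illustrative:

```python
import numpy as np

def fsmn_memory_block(v, weight):
    """Depthwise 1-D conv over time with 'same' padding, plus residual.

    v:      [1, T, C] value tensor (here T=97, C=512)
    weight: [K, C] per-channel filter taps (here K=11), the dump's
            [1,11,1,512] static tensor with singleton axes removed.
    """
    _, T, C = v.shape
    K = weight.shape[0]
    pad = K // 2  # symmetric padding of 5 for K=11, as in pad_amount [[0,0],[5,5]]
    x = np.pad(v[0], ((pad, pad), (0, 0)))  # [T + 2*pad, C]
    out = np.zeros((T, C), dtype=v.dtype)
    for t in range(T):
        # depthwise: each output channel sees only its own channel
        out[t] = (x[t:t + K] * weight).sum(axis=0)
    return v + out[None]  # residual add with the value branch (row .../self_attn/Add)

T, C, K = 97, 512, 11
v = np.random.randn(1, T, C).astype(np.float32)
w = np.random.randn(K, C).astype(np.float32)
y = fsmn_memory_block(v, w)
print(y.shape)  # (1, 97, 512)
```

The surrounding Reshape/Transpose rows exist only to move between the converter's NCHW and NHWC layouts; they carry no arithmetic.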
1119,/encoder/encoders.28/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.28/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1120,/encoder/encoders.28/self_attn/Transpose_4,Transpose,"/encoder/encoders.28/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1121,/encoder/encoders.28/self_attn/MatMul,MatMul,"/encoder/encoders.28/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.28/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1122,/encoder/encoders.28/self_attn/Softmax,Softmax,"/encoder/encoders.28/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1123,/encoder/encoders.28/self_attn/MatMul_1,MatMul,"/encoder/encoders.28/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.28/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1124,/encoder/encoders.28/self_attn/Transpose_5,Transpose,"/encoder/encoders.28/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1125,/encoder/encoders.28/self_attn/Reshape_3,Reshape,"/encoder/encoders.28/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1126,/encoder/encoders.28/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.28/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.28.self_attn.linear_out.bias
,,,"onnx::MatMul_9654 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.28.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1127,/encoder/encoders.28/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.28/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1128,/encoder/encoders.28/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.28/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.28/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.28/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1129,/encoder/encoders.28/Add,Eltwise_Binary,"/encoder/encoders.27/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.28/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.28/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1130,/encoder/encoders.28/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.28/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.28/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9655 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9656 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1131,/encoder/encoders.28/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.28/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.28/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1132,/encoder/encoders.28/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.28/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.28/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.28.feed_forward.w_1.bias
,,,"onnx::MatMul_9657 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.28.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1133,/encoder/encoders.28/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.28/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.28/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1134,/encoder/encoders.28/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.28/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.28/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1135,/encoder/encoders.28/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.28/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.28/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1136,/encoder/encoders.28/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.28/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.28/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.28.feed_forward.w_2.bias
,,,"onnx::MatMul_9658 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.28.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1137,/encoder/encoders.28/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.28/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.28/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1138,/encoder/encoders.28/Add_1,Eltwise_Binary,"/encoder/encoders.28/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.28/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.28/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1139,/encoder/encoders.29/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.28/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.29/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9659 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9660 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1140,/encoder/encoders.29/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.29/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1141,/encoder/encoders.29/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.29/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.29.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9661 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.29.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1142,/encoder/encoders.29/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.29/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1143,/encoder/encoders.29/self_attn/Split,Split,"/encoder/encoders.29/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.29/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.29/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1144,/encoder/encoders.29/self_attn/Reshape,Reshape,"/encoder/encoders.29/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1145,/encoder/encoders.29/self_attn/Transpose,Transpose,"/encoder/encoders.29/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1146,/encoder/encoders.29/self_attn/Reshape_1,Reshape,"/encoder/encoders.29/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1147,/encoder/encoders.29/self_attn/Reshape_2,Reshape,"/encoder/encoders.29/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1148,/encoder/encoders.29/self_attn/Transpose_1,Transpose,"/encoder/encoders.29/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1149,/encoder/encoders.29/self_attn/Transpose_2,Transpose,"/encoder/encoders.29/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1150,/encoder/encoders.29/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.29/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1151,/encoder/encoders.29/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.29/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1152,/encoder/encoders.29/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.29/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.29.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1153,/encoder/encoders.29/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.29/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1154,/encoder/encoders.29/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.29/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1155,/encoder/encoders.29/self_attn/Transpose_3,Transpose,"/encoder/encoders.29/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1156,/encoder/encoders.29/self_attn/Add,Eltwise_Binary,"/encoder/encoders.29/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.29/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1157,/encoder/encoders.29/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.29/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1158,/encoder/encoders.29/self_attn/Transpose_4,Transpose,"/encoder/encoders.29/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1159,/encoder/encoders.29/self_attn/MatMul,MatMul,"/encoder/encoders.29/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.29/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1160,/encoder/encoders.29/self_attn/Softmax,Softmax,"/encoder/encoders.29/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1161,/encoder/encoders.29/self_attn/MatMul_1,MatMul,"/encoder/encoders.29/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.29/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1162,/encoder/encoders.29/self_attn/Transpose_5,Transpose,"/encoder/encoders.29/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1163,/encoder/encoders.29/self_attn/Reshape_3,Reshape,"/encoder/encoders.29/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1164,/encoder/encoders.29/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.29/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.29.self_attn.linear_out.bias
,,,"onnx::MatMul_9675 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.29.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1165,/encoder/encoders.29/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.29/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1166,/encoder/encoders.29/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.29/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.29/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.29/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1167,/encoder/encoders.29/Add,Eltwise_Binary,"/encoder/encoders.28/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.29/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.29/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1168,/encoder/encoders.29/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.29/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.29/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9676 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9677 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1169,/encoder/encoders.29/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.29/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.29/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1170,/encoder/encoders.29/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.29/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.29/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.29.feed_forward.w_1.bias
,,,"onnx::MatMul_9678 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.29.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1171,/encoder/encoders.29/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.29/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.29/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1172,/encoder/encoders.29/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.29/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.29/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1173,/encoder/encoders.29/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.29/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.29/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1174,/encoder/encoders.29/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.29/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.29/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.29.feed_forward.w_2.bias
,,,"onnx::MatMul_9679 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.29.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1175,/encoder/encoders.29/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.29/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.29/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1176,/encoder/encoders.29/Add_1,Eltwise_Binary,"/encoder/encoders.29/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.29/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.29/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1177,/encoder/encoders.30/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.29/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.30/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9680 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9681 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1178,/encoder/encoders.30/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.30/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1179,/encoder/encoders.30/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.30/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.30.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9682 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.30.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1180,/encoder/encoders.30/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.30/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1181,/encoder/encoders.30/self_attn/Split,Split,"/encoder/encoders.30/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.30/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.30/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1182,/encoder/encoders.30/self_attn/Reshape,Reshape,"/encoder/encoders.30/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1183,/encoder/encoders.30/self_attn/Transpose,Transpose,"/encoder/encoders.30/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1184,/encoder/encoders.30/self_attn/Reshape_1,Reshape,"/encoder/encoders.30/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1185,/encoder/encoders.30/self_attn/Reshape_2,Reshape,"/encoder/encoders.30/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1186,/encoder/encoders.30/self_attn/Transpose_1,Transpose,"/encoder/encoders.30/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1187,/encoder/encoders.30/self_attn/Transpose_2,Transpose,"/encoder/encoders.30/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1188,/encoder/encoders.30/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.30/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1189,/encoder/encoders.30/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.30/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1190,/encoder/encoders.30/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.30/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.30.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1191,/encoder/encoders.30/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.30/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1192,/encoder/encoders.30/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.30/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1193,/encoder/encoders.30/self_attn/Transpose_3,Transpose,"/encoder/encoders.30/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1194,/encoder/encoders.30/self_attn/Add,Eltwise_Binary,"/encoder/encoders.30/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.30/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1195,/encoder/encoders.30/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.30/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1196,/encoder/encoders.30/self_attn/Transpose_4,Transpose,"/encoder/encoders.30/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1197,/encoder/encoders.30/self_attn/MatMul,MatMul,"/encoder/encoders.30/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.30/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1198,/encoder/encoders.30/self_attn/Softmax,Softmax,"/encoder/encoders.30/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1199,/encoder/encoders.30/self_attn/MatMul_1,MatMul,"/encoder/encoders.30/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.30/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1200,/encoder/encoders.30/self_attn/Transpose_5,Transpose,"/encoder/encoders.30/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1201,/encoder/encoders.30/self_attn/Reshape_3,Reshape,"/encoder/encoders.30/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1202,/encoder/encoders.30/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.30/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.30.self_attn.linear_out.bias
,,,"onnx::MatMul_9696 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.30.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1203,/encoder/encoders.30/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.30/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1204,/encoder/encoders.30/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.30/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.30/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.30/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1205,/encoder/encoders.30/Add,Eltwise_Binary,"/encoder/encoders.29/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.30/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.30/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1206,/encoder/encoders.30/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.30/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.30/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9697 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9698 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1207,/encoder/encoders.30/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.30/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.30/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1208,/encoder/encoders.30/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.30/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.30/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.30.feed_forward.w_1.bias
,,,"onnx::MatMul_9699 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.30.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1209,/encoder/encoders.30/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.30/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.30/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1210,/encoder/encoders.30/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.30/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.30/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1211,/encoder/encoders.30/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.30/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.30/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1212,/encoder/encoders.30/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.30/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.30/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.30.feed_forward.w_2.bias
,,,"onnx::MatMul_9700 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.30.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1213,/encoder/encoders.30/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.30/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.30/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1214,/encoder/encoders.30/Add_1,Eltwise_Binary,"/encoder/encoders.30/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.30/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.30/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1215,/encoder/encoders.31/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.30/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.31/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9701 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9702 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1216,/encoder/encoders.31/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.31/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1217,/encoder/encoders.31/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.31/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.31.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9703 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.31.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1218,/encoder/encoders.31/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.31/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1219,/encoder/encoders.31/self_attn/Split,Split,"/encoder/encoders.31/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.31/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.31/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1220,/encoder/encoders.31/self_attn/Reshape,Reshape,"/encoder/encoders.31/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1221,/encoder/encoders.31/self_attn/Transpose,Transpose,"/encoder/encoders.31/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1222,/encoder/encoders.31/self_attn/Reshape_1,Reshape,"/encoder/encoders.31/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1223,/encoder/encoders.31/self_attn/Reshape_2,Reshape,"/encoder/encoders.31/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1224,/encoder/encoders.31/self_attn/Transpose_1,Transpose,"/encoder/encoders.31/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1225,/encoder/encoders.31/self_attn/Transpose_2,Transpose,"/encoder/encoders.31/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1226,/encoder/encoders.31/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.31/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1227,/encoder/encoders.31/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.31/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1228,/encoder/encoders.31/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.31/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.31.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1229,/encoder/encoders.31/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.31/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1230,/encoder/encoders.31/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.31/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1231,/encoder/encoders.31/self_attn/Transpose_3,Transpose,"/encoder/encoders.31/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1232,/encoder/encoders.31/self_attn/Add,Eltwise_Binary,"/encoder/encoders.31/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.31/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1233,/encoder/encoders.31/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.31/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1234,/encoder/encoders.31/self_attn/Transpose_4,Transpose,"/encoder/encoders.31/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1235,/encoder/encoders.31/self_attn/MatMul,MatMul,"/encoder/encoders.31/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.31/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1236,/encoder/encoders.31/self_attn/Softmax,Softmax,"/encoder/encoders.31/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1237,/encoder/encoders.31/self_attn/MatMul_1,MatMul,"/encoder/encoders.31/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.31/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1238,/encoder/encoders.31/self_attn/Transpose_5,Transpose,"/encoder/encoders.31/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1239,/encoder/encoders.31/self_attn/Reshape_3,Reshape,"/encoder/encoders.31/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1240,/encoder/encoders.31/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.31/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.31.self_attn.linear_out.bias
,,,"onnx::MatMul_9717 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.31.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1241,/encoder/encoders.31/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.31/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1242,/encoder/encoders.31/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.31/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.31/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.31/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1243,/encoder/encoders.31/Add,Eltwise_Binary,"/encoder/encoders.30/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.31/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.31/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1244,/encoder/encoders.31/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.31/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.31/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9718 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9719 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1245,/encoder/encoders.31/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.31/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.31/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1246,/encoder/encoders.31/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.31/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.31/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.31.feed_forward.w_1.bias
,,,"onnx::MatMul_9720 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.31.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1247,/encoder/encoders.31/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.31/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.31/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1248,/encoder/encoders.31/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.31/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.31/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1249,/encoder/encoders.31/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.31/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.31/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1250,/encoder/encoders.31/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.31/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.31/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.31.feed_forward.w_2.bias
,,,"onnx::MatMul_9721 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.31.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1251,/encoder/encoders.31/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.31/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.31/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1252,/encoder/encoders.31/Add_1,Eltwise_Binary,"/encoder/encoders.31/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.31/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.31/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1253,/encoder/encoders.32/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.31/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.32/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9722 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9723 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1254,/encoder/encoders.32/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.32/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1255,/encoder/encoders.32/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.32/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.32.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9724 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.32.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1256,/encoder/encoders.32/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.32/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1257,/encoder/encoders.32/self_attn/Split,Split,"/encoder/encoders.32/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.32/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.32/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1258,/encoder/encoders.32/self_attn/Reshape,Reshape,"/encoder/encoders.32/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1259,/encoder/encoders.32/self_attn/Transpose,Transpose,"/encoder/encoders.32/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1260,/encoder/encoders.32/self_attn/Reshape_1,Reshape,"/encoder/encoders.32/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1261,/encoder/encoders.32/self_attn/Reshape_2,Reshape,"/encoder/encoders.32/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1262,/encoder/encoders.32/self_attn/Transpose_1,Transpose,"/encoder/encoders.32/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1263,/encoder/encoders.32/self_attn/Transpose_2,Transpose,"/encoder/encoders.32/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1264,/encoder/encoders.32/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.32/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1265,/encoder/encoders.32/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.32/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1266,/encoder/encoders.32/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.32/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.32.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1267,/encoder/encoders.32/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.32/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1268,/encoder/encoders.32/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.32/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1269,/encoder/encoders.32/self_attn/Transpose_3,Transpose,"/encoder/encoders.32/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1270,/encoder/encoders.32/self_attn/Add,Eltwise_Binary,"/encoder/encoders.32/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.32/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1271,/encoder/encoders.32/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.32/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1272,/encoder/encoders.32/self_attn/Transpose_4,Transpose,"/encoder/encoders.32/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1273,/encoder/encoders.32/self_attn/MatMul,MatMul,"/encoder/encoders.32/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.32/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1274,/encoder/encoders.32/self_attn/Softmax,Softmax,"/encoder/encoders.32/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1275,/encoder/encoders.32/self_attn/MatMul_1,MatMul,"/encoder/encoders.32/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.32/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1276,/encoder/encoders.32/self_attn/Transpose_5,Transpose,"/encoder/encoders.32/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1277,/encoder/encoders.32/self_attn/Reshape_3,Reshape,"/encoder/encoders.32/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1278,/encoder/encoders.32/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.32/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.32.self_attn.linear_out.bias
,,,"onnx::MatMul_9738 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.32.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1279,/encoder/encoders.32/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.32/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1280,/encoder/encoders.32/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.32/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.32/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.32/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1281,/encoder/encoders.32/Add,Eltwise_Binary,"/encoder/encoders.31/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.32/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.32/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1282,/encoder/encoders.32/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.32/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.32/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9739 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9740 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1283,/encoder/encoders.32/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.32/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.32/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1284,/encoder/encoders.32/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.32/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.32/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.32.feed_forward.w_1.bias
,,,"onnx::MatMul_9741 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.32.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1285,/encoder/encoders.32/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.32/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.32/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1286,/encoder/encoders.32/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.32/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.32/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1287,/encoder/encoders.32/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.32/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.32/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1288,/encoder/encoders.32/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.32/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.32/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.32.feed_forward.w_2.bias
,,,"onnx::MatMul_9742 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.32.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1289,/encoder/encoders.32/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.32/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.32/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1290,/encoder/encoders.32/Add_1,Eltwise_Binary,"/encoder/encoders.32/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.32/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.32/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1291,/encoder/encoders.33/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.32/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.33/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9743 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9744 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1292,/encoder/encoders.33/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.33/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1293,/encoder/encoders.33/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.33/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.33.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9745 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.33.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1294,/encoder/encoders.33/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.33/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1295,/encoder/encoders.33/self_attn/Split,Split,"/encoder/encoders.33/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.33/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.33/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1296,/encoder/encoders.33/self_attn/Reshape,Reshape,"/encoder/encoders.33/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1297,/encoder/encoders.33/self_attn/Transpose,Transpose,"/encoder/encoders.33/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1298,/encoder/encoders.33/self_attn/Reshape_1,Reshape,"/encoder/encoders.33/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1299,/encoder/encoders.33/self_attn/Reshape_2,Reshape,"/encoder/encoders.33/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1300,/encoder/encoders.33/self_attn/Transpose_1,Transpose,"/encoder/encoders.33/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1301,/encoder/encoders.33/self_attn/Transpose_2,Transpose,"/encoder/encoders.33/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1302,/encoder/encoders.33/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.33/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1303,/encoder/encoders.33/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.33/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1304,/encoder/encoders.33/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.33/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.33.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1305,/encoder/encoders.33/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.33/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1306,/encoder/encoders.33/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.33/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1307,/encoder/encoders.33/self_attn/Transpose_3,Transpose,"/encoder/encoders.33/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1308,/encoder/encoders.33/self_attn/Add,Eltwise_Binary,"/encoder/encoders.33/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.33/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1309,/encoder/encoders.33/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.33/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1310,/encoder/encoders.33/self_attn/Transpose_4,Transpose,"/encoder/encoders.33/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1311,/encoder/encoders.33/self_attn/MatMul,MatMul,"/encoder/encoders.33/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.33/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1312,/encoder/encoders.33/self_attn/Softmax,Softmax,"/encoder/encoders.33/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1313,/encoder/encoders.33/self_attn/MatMul_1,MatMul,"/encoder/encoders.33/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.33/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1314,/encoder/encoders.33/self_attn/Transpose_5,Transpose,"/encoder/encoders.33/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1315,/encoder/encoders.33/self_attn/Reshape_3,Reshape,"/encoder/encoders.33/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1316,/encoder/encoders.33/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.33/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.33.self_attn.linear_out.bias
,,,"onnx::MatMul_9759 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.33.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1317,/encoder/encoders.33/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.33/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1318,/encoder/encoders.33/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.33/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.33/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.33/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1319,/encoder/encoders.33/Add,Eltwise_Binary,"/encoder/encoders.32/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.33/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.33/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1320,/encoder/encoders.33/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.33/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.33/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9760 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9761 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1321,/encoder/encoders.33/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.33/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.33/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1322,/encoder/encoders.33/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.33/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.33/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.33.feed_forward.w_1.bias
,,,"onnx::MatMul_9762 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.33.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1323,/encoder/encoders.33/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.33/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.33/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1324,/encoder/encoders.33/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.33/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.33/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1325,/encoder/encoders.33/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.33/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.33/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1326,/encoder/encoders.33/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.33/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.33/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.33.feed_forward.w_2.bias
,,,"onnx::MatMul_9763 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.33.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1327,/encoder/encoders.33/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.33/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.33/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1328,/encoder/encoders.33/Add_1,Eltwise_Binary,"/encoder/encoders.33/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.33/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.33/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1329,/encoder/encoders.34/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.33/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.34/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9764 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9765 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1330,/encoder/encoders.34/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.34/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1331,/encoder/encoders.34/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.34/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.34.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9766 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.34.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1332,/encoder/encoders.34/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.34/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1333,/encoder/encoders.34/self_attn/Split,Split,"/encoder/encoders.34/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.34/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.34/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1334,/encoder/encoders.34/self_attn/Reshape,Reshape,"/encoder/encoders.34/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1335,/encoder/encoders.34/self_attn/Transpose,Transpose,"/encoder/encoders.34/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1336,/encoder/encoders.34/self_attn/Reshape_1,Reshape,"/encoder/encoders.34/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1337,/encoder/encoders.34/self_attn/Reshape_2,Reshape,"/encoder/encoders.34/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1338,/encoder/encoders.34/self_attn/Transpose_1,Transpose,"/encoder/encoders.34/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1339,/encoder/encoders.34/self_attn/Transpose_2,Transpose,"/encoder/encoders.34/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1340,/encoder/encoders.34/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.34/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1341,/encoder/encoders.34/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.34/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1342,/encoder/encoders.34/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.34/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.34.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1343,/encoder/encoders.34/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.34/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1344,/encoder/encoders.34/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.34/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1345,/encoder/encoders.34/self_attn/Transpose_3,Transpose,"/encoder/encoders.34/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1346,/encoder/encoders.34/self_attn/Add,Eltwise_Binary,"/encoder/encoders.34/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.34/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1347,/encoder/encoders.34/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.34/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1348,/encoder/encoders.34/self_attn/Transpose_4,Transpose,"/encoder/encoders.34/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1349,/encoder/encoders.34/self_attn/MatMul,MatMul,"/encoder/encoders.34/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.34/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1350,/encoder/encoders.34/self_attn/Softmax,Softmax,"/encoder/encoders.34/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1351,/encoder/encoders.34/self_attn/MatMul_1,MatMul,"/encoder/encoders.34/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.34/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1352,/encoder/encoders.34/self_attn/Transpose_5,Transpose,"/encoder/encoders.34/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1353,/encoder/encoders.34/self_attn/Reshape_3,Reshape,"/encoder/encoders.34/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1354,/encoder/encoders.34/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.34/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.34.self_attn.linear_out.bias
,,,"onnx::MatMul_9780 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.34.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1355,/encoder/encoders.34/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.34/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1356,/encoder/encoders.34/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.34/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.34/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.34/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1357,/encoder/encoders.34/Add,Eltwise_Binary,"/encoder/encoders.33/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.34/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.34/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1358,/encoder/encoders.34/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.34/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.34/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9781 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9782 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1359,/encoder/encoders.34/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.34/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.34/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1360,/encoder/encoders.34/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.34/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.34/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.34.feed_forward.w_1.bias
,,,"onnx::MatMul_9783 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.34.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1361,/encoder/encoders.34/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.34/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.34/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1362,/encoder/encoders.34/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.34/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.34/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1363,/encoder/encoders.34/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.34/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.34/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1364,/encoder/encoders.34/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.34/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.34/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.34.feed_forward.w_2.bias
,,,"onnx::MatMul_9784 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.34.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1365,/encoder/encoders.34/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.34/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.34/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1366,/encoder/encoders.34/Add_1,Eltwise_Binary,"/encoder/encoders.34/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.34/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.34/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1367,/encoder/encoders.35/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.34/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.35/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9785 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9786 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1368,/encoder/encoders.35/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.35/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1369,/encoder/encoders.35/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.35/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.35.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9787 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.35.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1370,/encoder/encoders.35/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.35/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1371,/encoder/encoders.35/self_attn/Split,Split,"/encoder/encoders.35/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.35/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.35/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1372,/encoder/encoders.35/self_attn/Reshape,Reshape,"/encoder/encoders.35/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1373,/encoder/encoders.35/self_attn/Transpose,Transpose,"/encoder/encoders.35/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1374,/encoder/encoders.35/self_attn/Reshape_1,Reshape,"/encoder/encoders.35/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1375,/encoder/encoders.35/self_attn/Reshape_2,Reshape,"/encoder/encoders.35/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1376,/encoder/encoders.35/self_attn/Transpose_1,Transpose,"/encoder/encoders.35/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1377,/encoder/encoders.35/self_attn/Transpose_2,Transpose,"/encoder/encoders.35/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1378,/encoder/encoders.35/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.35/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1379,/encoder/encoders.35/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.35/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1380,/encoder/encoders.35/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.35/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.35.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1381,/encoder/encoders.35/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.35/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1382,/encoder/encoders.35/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.35/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1383,/encoder/encoders.35/self_attn/Transpose_3,Transpose,"/encoder/encoders.35/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1384,/encoder/encoders.35/self_attn/Add,Eltwise_Binary,"/encoder/encoders.35/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.35/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1385,/encoder/encoders.35/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.35/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1386,/encoder/encoders.35/self_attn/Transpose_4,Transpose,"/encoder/encoders.35/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1387,/encoder/encoders.35/self_attn/MatMul,MatMul,"/encoder/encoders.35/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.35/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1388,/encoder/encoders.35/self_attn/Softmax,Softmax,"/encoder/encoders.35/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1389,/encoder/encoders.35/self_attn/MatMul_1,MatMul,"/encoder/encoders.35/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.35/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1390,/encoder/encoders.35/self_attn/Transpose_5,Transpose,"/encoder/encoders.35/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1391,/encoder/encoders.35/self_attn/Reshape_3,Reshape,"/encoder/encoders.35/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1392,/encoder/encoders.35/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.35/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.35.self_attn.linear_out.bias
,,,"onnx::MatMul_9801 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.35.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1393,/encoder/encoders.35/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.35/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1394,/encoder/encoders.35/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.35/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.35/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.35/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1395,/encoder/encoders.35/Add,Eltwise_Binary,"/encoder/encoders.34/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.35/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.35/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1396,/encoder/encoders.35/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.35/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.35/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9802 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9803 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1397,/encoder/encoders.35/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.35/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.35/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1398,/encoder/encoders.35/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.35/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.35/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.35.feed_forward.w_1.bias
,,,"onnx::MatMul_9804 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.35.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1399,/encoder/encoders.35/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.35/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.35/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1400,/encoder/encoders.35/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.35/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.35/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1401,/encoder/encoders.35/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.35/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.35/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1402,/encoder/encoders.35/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.35/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.35/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.35.feed_forward.w_2.bias
,,,"onnx::MatMul_9805 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.35.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1403,/encoder/encoders.35/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.35/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.35/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1404,/encoder/encoders.35/Add_1,Eltwise_Binary,"/encoder/encoders.35/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.35/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.35/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1405,/encoder/encoders.36/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.35/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.36/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9806 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9807 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1406,/encoder/encoders.36/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.36/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1407,/encoder/encoders.36/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.36/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.36.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9808 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.36.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1408,/encoder/encoders.36/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.36/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1409,/encoder/encoders.36/self_attn/Split,Split,"/encoder/encoders.36/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.36/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.36/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1410,/encoder/encoders.36/self_attn/Reshape,Reshape,"/encoder/encoders.36/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1411,/encoder/encoders.36/self_attn/Transpose,Transpose,"/encoder/encoders.36/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1412,/encoder/encoders.36/self_attn/Reshape_1,Reshape,"/encoder/encoders.36/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1413,/encoder/encoders.36/self_attn/Reshape_2,Reshape,"/encoder/encoders.36/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1414,/encoder/encoders.36/self_attn/Transpose_1,Transpose,"/encoder/encoders.36/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1415,/encoder/encoders.36/self_attn/Transpose_2,Transpose,"/encoder/encoders.36/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1416,/encoder/encoders.36/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.36/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1417,/encoder/encoders.36/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.36/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1418,/encoder/encoders.36/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.36/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.36.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1419,/encoder/encoders.36/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.36/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1420,/encoder/encoders.36/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.36/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1421,/encoder/encoders.36/self_attn/Transpose_3,Transpose,"/encoder/encoders.36/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1422,/encoder/encoders.36/self_attn/Add,Eltwise_Binary,"/encoder/encoders.36/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.36/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1423,/encoder/encoders.36/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.36/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1424,/encoder/encoders.36/self_attn/Transpose_4,Transpose,"/encoder/encoders.36/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1425,/encoder/encoders.36/self_attn/MatMul,MatMul,"/encoder/encoders.36/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.36/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1426,/encoder/encoders.36/self_attn/Softmax,Softmax,"/encoder/encoders.36/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1427,/encoder/encoders.36/self_attn/MatMul_1,MatMul,"/encoder/encoders.36/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.36/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1428,/encoder/encoders.36/self_attn/Transpose_5,Transpose,"/encoder/encoders.36/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1429,/encoder/encoders.36/self_attn/Reshape_3,Reshape,"/encoder/encoders.36/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1430,/encoder/encoders.36/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.36/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.36.self_attn.linear_out.bias
,,,"onnx::MatMul_9822 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.36.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1431,/encoder/encoders.36/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.36/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1432,/encoder/encoders.36/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.36/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.36/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.36/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1433,/encoder/encoders.36/Add,Eltwise_Binary,"/encoder/encoders.35/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.36/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.36/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1434,/encoder/encoders.36/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.36/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.36/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9823 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9824 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1435,/encoder/encoders.36/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.36/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.36/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1436,/encoder/encoders.36/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.36/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.36/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.36.feed_forward.w_1.bias
,,,"onnx::MatMul_9825 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.36.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1437,/encoder/encoders.36/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.36/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.36/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1438,/encoder/encoders.36/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.36/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.36/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1439,/encoder/encoders.36/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.36/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.36/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1440,/encoder/encoders.36/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.36/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.36/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.36.feed_forward.w_2.bias
,,,"onnx::MatMul_9826 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.36.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1441,/encoder/encoders.36/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.36/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.36/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1442,/encoder/encoders.36/Add_1,Eltwise_Binary,"/encoder/encoders.36/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.36/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.36/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1443,/encoder/encoders.37/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.36/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.37/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9827 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9828 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1444,/encoder/encoders.37/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.37/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1445,/encoder/encoders.37/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.37/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.37.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9829 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.37.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1446,/encoder/encoders.37/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.37/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1447,/encoder/encoders.37/self_attn/Split,Split,"/encoder/encoders.37/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.37/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.37/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1448,/encoder/encoders.37/self_attn/Reshape,Reshape,"/encoder/encoders.37/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1449,/encoder/encoders.37/self_attn/Transpose,Transpose,"/encoder/encoders.37/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1450,/encoder/encoders.37/self_attn/Reshape_1,Reshape,"/encoder/encoders.37/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1451,/encoder/encoders.37/self_attn/Reshape_2,Reshape,"/encoder/encoders.37/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1452,/encoder/encoders.37/self_attn/Transpose_1,Transpose,"/encoder/encoders.37/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1453,/encoder/encoders.37/self_attn/Transpose_2,Transpose,"/encoder/encoders.37/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1454,/encoder/encoders.37/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.37/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1455,/encoder/encoders.37/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.37/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1456,/encoder/encoders.37/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.37/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.37.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1457,/encoder/encoders.37/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.37/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1458,/encoder/encoders.37/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.37/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1459,/encoder/encoders.37/self_attn/Transpose_3,Transpose,"/encoder/encoders.37/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1460,/encoder/encoders.37/self_attn/Add,Eltwise_Binary,"/encoder/encoders.37/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.37/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1461,/encoder/encoders.37/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.37/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1462,/encoder/encoders.37/self_attn/Transpose_4,Transpose,"/encoder/encoders.37/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1463,/encoder/encoders.37/self_attn/MatMul,MatMul,"/encoder/encoders.37/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.37/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1464,/encoder/encoders.37/self_attn/Softmax,Softmax,"/encoder/encoders.37/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1465,/encoder/encoders.37/self_attn/MatMul_1,MatMul,"/encoder/encoders.37/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.37/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1466,/encoder/encoders.37/self_attn/Transpose_5,Transpose,"/encoder/encoders.37/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1467,/encoder/encoders.37/self_attn/Reshape_3,Reshape,"/encoder/encoders.37/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1468,/encoder/encoders.37/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.37/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.37.self_attn.linear_out.bias
,,,"onnx::MatMul_9843 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.37.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1469,/encoder/encoders.37/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.37/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1470,/encoder/encoders.37/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.37/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.37/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.37/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1471,/encoder/encoders.37/Add,Eltwise_Binary,"/encoder/encoders.36/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.37/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.37/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1472,/encoder/encoders.37/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.37/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.37/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9844 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9845 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1473,/encoder/encoders.37/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.37/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.37/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1474,/encoder/encoders.37/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.37/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.37/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.37.feed_forward.w_1.bias
,,,"onnx::MatMul_9846 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.37.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1475,/encoder/encoders.37/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.37/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.37/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1476,/encoder/encoders.37/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.37/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.37/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1477,/encoder/encoders.37/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.37/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.37/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1478,/encoder/encoders.37/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.37/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.37/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.37.feed_forward.w_2.bias
,,,"onnx::MatMul_9847 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.37.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1479,/encoder/encoders.37/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.37/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.37/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1480,/encoder/encoders.37/Add_1,Eltwise_Binary,"/encoder/encoders.37/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.37/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.37/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1481,/encoder/encoders.38/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.37/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.38/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9848 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9849 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1482,/encoder/encoders.38/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.38/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1483,/encoder/encoders.38/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.38/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.38.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9850 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.38.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1484,/encoder/encoders.38/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.38/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1485,/encoder/encoders.38/self_attn/Split,Split,"/encoder/encoders.38/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.38/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.38/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1486,/encoder/encoders.38/self_attn/Reshape,Reshape,"/encoder/encoders.38/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1487,/encoder/encoders.38/self_attn/Transpose,Transpose,"/encoder/encoders.38/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1488,/encoder/encoders.38/self_attn/Reshape_1,Reshape,"/encoder/encoders.38/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1489,/encoder/encoders.38/self_attn/Reshape_2,Reshape,"/encoder/encoders.38/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1490,/encoder/encoders.38/self_attn/Transpose_1,Transpose,"/encoder/encoders.38/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1491,/encoder/encoders.38/self_attn/Transpose_2,Transpose,"/encoder/encoders.38/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1492,/encoder/encoders.38/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.38/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1493,/encoder/encoders.38/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.38/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1494,/encoder/encoders.38/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.38/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.38.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1495,/encoder/encoders.38/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.38/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1496,/encoder/encoders.38/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.38/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1497,/encoder/encoders.38/self_attn/Transpose_3,Transpose,"/encoder/encoders.38/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1498,/encoder/encoders.38/self_attn/Add,Eltwise_Binary,"/encoder/encoders.38/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.38/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1499,/encoder/encoders.38/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.38/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1500,/encoder/encoders.38/self_attn/Transpose_4,Transpose,"/encoder/encoders.38/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1501,/encoder/encoders.38/self_attn/MatMul,MatMul,"/encoder/encoders.38/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.38/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1502,/encoder/encoders.38/self_attn/Softmax,Softmax,"/encoder/encoders.38/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1503,/encoder/encoders.38/self_attn/MatMul_1,MatMul,"/encoder/encoders.38/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.38/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1504,/encoder/encoders.38/self_attn/Transpose_5,Transpose,"/encoder/encoders.38/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1505,/encoder/encoders.38/self_attn/Reshape_3,Reshape,"/encoder/encoders.38/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1506,/encoder/encoders.38/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.38/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.38.self_attn.linear_out.bias
,,,"onnx::MatMul_9864 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.38.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1507,/encoder/encoders.38/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.38/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1508,/encoder/encoders.38/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.38/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.38/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.38/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1509,/encoder/encoders.38/Add,Eltwise_Binary,"/encoder/encoders.37/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.38/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.38/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1510,/encoder/encoders.38/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.38/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.38/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9865 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9866 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1511,/encoder/encoders.38/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.38/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.38/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1512,/encoder/encoders.38/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.38/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.38/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.38.feed_forward.w_1.bias
,,,"onnx::MatMul_9867 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.38.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1513,/encoder/encoders.38/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.38/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.38/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1514,/encoder/encoders.38/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.38/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.38/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1515,/encoder/encoders.38/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.38/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.38/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1516,/encoder/encoders.38/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.38/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.38/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.38.feed_forward.w_2.bias
,,,"onnx::MatMul_9868 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.38.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1517,/encoder/encoders.38/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.38/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.38/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1518,/encoder/encoders.38/Add_1,Eltwise_Binary,"/encoder/encoders.38/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.38/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.38/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1519,/encoder/encoders.39/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.38/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.39/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9869 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9870 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1520,/encoder/encoders.39/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.39/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1521,/encoder/encoders.39/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.39/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.39.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9871 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.39.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1522,/encoder/encoders.39/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.39/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1523,/encoder/encoders.39/self_attn/Split,Split,"/encoder/encoders.39/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.39/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.39/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1524,/encoder/encoders.39/self_attn/Reshape,Reshape,"/encoder/encoders.39/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1525,/encoder/encoders.39/self_attn/Transpose,Transpose,"/encoder/encoders.39/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1526,/encoder/encoders.39/self_attn/Reshape_1,Reshape,"/encoder/encoders.39/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1527,/encoder/encoders.39/self_attn/Reshape_2,Reshape,"/encoder/encoders.39/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1528,/encoder/encoders.39/self_attn/Transpose_1,Transpose,"/encoder/encoders.39/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1529,/encoder/encoders.39/self_attn/Transpose_2,Transpose,"/encoder/encoders.39/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1530,/encoder/encoders.39/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.39/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1531,/encoder/encoders.39/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.39/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1532,/encoder/encoders.39/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.39/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.39.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1533,/encoder/encoders.39/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.39/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1534,/encoder/encoders.39/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.39/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1535,/encoder/encoders.39/self_attn/Transpose_3,Transpose,"/encoder/encoders.39/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1536,/encoder/encoders.39/self_attn/Add,Eltwise_Binary,"/encoder/encoders.39/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.39/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1537,/encoder/encoders.39/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.39/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1538,/encoder/encoders.39/self_attn/Transpose_4,Transpose,"/encoder/encoders.39/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1539,/encoder/encoders.39/self_attn/MatMul,MatMul,"/encoder/encoders.39/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.39/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1540,/encoder/encoders.39/self_attn/Softmax,Softmax,"/encoder/encoders.39/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1541,/encoder/encoders.39/self_attn/MatMul_1,MatMul,"/encoder/encoders.39/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.39/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1542,/encoder/encoders.39/self_attn/Transpose_5,Transpose,"/encoder/encoders.39/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1543,/encoder/encoders.39/self_attn/Reshape_3,Reshape,"/encoder/encoders.39/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1544,/encoder/encoders.39/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.39/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.39.self_attn.linear_out.bias
,,,"onnx::MatMul_9885 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.39.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1545,/encoder/encoders.39/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.39/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1546,/encoder/encoders.39/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.39/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.39/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.39/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1547,/encoder/encoders.39/Add,Eltwise_Binary,"/encoder/encoders.38/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.39/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.39/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1548,/encoder/encoders.39/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.39/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.39/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9886 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9887 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1549,/encoder/encoders.39/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.39/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.39/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1550,/encoder/encoders.39/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.39/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.39/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.39.feed_forward.w_1.bias
,,,"onnx::MatMul_9888 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.39.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1551,/encoder/encoders.39/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.39/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.39/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1552,/encoder/encoders.39/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.39/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.39/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1553,/encoder/encoders.39/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.39/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.39/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1554,/encoder/encoders.39/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.39/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.39/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.39.feed_forward.w_2.bias
,,,"onnx::MatMul_9889 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.39.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1555,/encoder/encoders.39/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.39/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.39/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1556,/encoder/encoders.39/Add_1,Eltwise_Binary,"/encoder/encoders.39/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.39/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.39/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1557,/encoder/encoders.40/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.39/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.40/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9890 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9891 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1558,/encoder/encoders.40/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.40/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1559,/encoder/encoders.40/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.40/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.40.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9892 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.40.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1560,/encoder/encoders.40/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.40/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1561,/encoder/encoders.40/self_attn/Split,Split,"/encoder/encoders.40/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.40/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.40/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1562,/encoder/encoders.40/self_attn/Reshape,Reshape,"/encoder/encoders.40/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1563,/encoder/encoders.40/self_attn/Transpose,Transpose,"/encoder/encoders.40/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1564,/encoder/encoders.40/self_attn/Reshape_1,Reshape,"/encoder/encoders.40/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1565,/encoder/encoders.40/self_attn/Reshape_2,Reshape,"/encoder/encoders.40/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1566,/encoder/encoders.40/self_attn/Transpose_1,Transpose,"/encoder/encoders.40/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1567,/encoder/encoders.40/self_attn/Transpose_2,Transpose,"/encoder/encoders.40/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1568,/encoder/encoders.40/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.40/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1569,/encoder/encoders.40/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.40/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1570,/encoder/encoders.40/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.40/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.40.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1571,/encoder/encoders.40/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.40/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1572,/encoder/encoders.40/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.40/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1573,/encoder/encoders.40/self_attn/Transpose_3,Transpose,"/encoder/encoders.40/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1574,/encoder/encoders.40/self_attn/Add,Eltwise_Binary,"/encoder/encoders.40/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.40/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1575,/encoder/encoders.40/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.40/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1576,/encoder/encoders.40/self_attn/Transpose_4,Transpose,"/encoder/encoders.40/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1577,/encoder/encoders.40/self_attn/MatMul,MatMul,"/encoder/encoders.40/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.40/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1578,/encoder/encoders.40/self_attn/Softmax,Softmax,"/encoder/encoders.40/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1579,/encoder/encoders.40/self_attn/MatMul_1,MatMul,"/encoder/encoders.40/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.40/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1580,/encoder/encoders.40/self_attn/Transpose_5,Transpose,"/encoder/encoders.40/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1581,/encoder/encoders.40/self_attn/Reshape_3,Reshape,"/encoder/encoders.40/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1582,/encoder/encoders.40/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.40/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.40.self_attn.linear_out.bias
,,,"onnx::MatMul_9906 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.40.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1583,/encoder/encoders.40/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.40/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1584,/encoder/encoders.40/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.40/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.40/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.40/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1585,/encoder/encoders.40/Add,Eltwise_Binary,"/encoder/encoders.39/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.40/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.40/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1586,/encoder/encoders.40/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.40/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.40/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9907 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9908 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1587,/encoder/encoders.40/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.40/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.40/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1588,/encoder/encoders.40/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.40/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.40/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.40.feed_forward.w_1.bias
,,,"onnx::MatMul_9909 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.40.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1589,/encoder/encoders.40/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.40/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.40/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1590,/encoder/encoders.40/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.40/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.40/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1591,/encoder/encoders.40/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.40/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.40/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1592,/encoder/encoders.40/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.40/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.40/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.40.feed_forward.w_2.bias
,,,"onnx::MatMul_9910 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.40.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1593,/encoder/encoders.40/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.40/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.40/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1594,/encoder/encoders.40/Add_1,Eltwise_Binary,"/encoder/encoders.40/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.40/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.40/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1595,/encoder/encoders.41/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.40/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.41/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9911 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9912 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1596,/encoder/encoders.41/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.41/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1597,/encoder/encoders.41/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.41/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.41.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9913 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.41.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1598,/encoder/encoders.41/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.41/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1599,/encoder/encoders.41/self_attn/Split,Split,"/encoder/encoders.41/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.41/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.41/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1600,/encoder/encoders.41/self_attn/Reshape,Reshape,"/encoder/encoders.41/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1601,/encoder/encoders.41/self_attn/Transpose,Transpose,"/encoder/encoders.41/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1602,/encoder/encoders.41/self_attn/Reshape_1,Reshape,"/encoder/encoders.41/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1603,/encoder/encoders.41/self_attn/Reshape_2,Reshape,"/encoder/encoders.41/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1604,/encoder/encoders.41/self_attn/Transpose_1,Transpose,"/encoder/encoders.41/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1605,/encoder/encoders.41/self_attn/Transpose_2,Transpose,"/encoder/encoders.41/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1606,/encoder/encoders.41/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.41/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1607,/encoder/encoders.41/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.41/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1608,/encoder/encoders.41/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.41/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.41.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1609,/encoder/encoders.41/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.41/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1610,/encoder/encoders.41/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.41/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1611,/encoder/encoders.41/self_attn/Transpose_3,Transpose,"/encoder/encoders.41/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1612,/encoder/encoders.41/self_attn/Add,Eltwise_Binary,"/encoder/encoders.41/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.41/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1613,/encoder/encoders.41/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.41/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1614,/encoder/encoders.41/self_attn/Transpose_4,Transpose,"/encoder/encoders.41/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1615,/encoder/encoders.41/self_attn/MatMul,MatMul,"/encoder/encoders.41/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.41/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1616,/encoder/encoders.41/self_attn/Softmax,Softmax,"/encoder/encoders.41/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1617,/encoder/encoders.41/self_attn/MatMul_1,MatMul,"/encoder/encoders.41/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.41/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1618,/encoder/encoders.41/self_attn/Transpose_5,Transpose,"/encoder/encoders.41/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1619,/encoder/encoders.41/self_attn/Reshape_3,Reshape,"/encoder/encoders.41/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1620,/encoder/encoders.41/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.41/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.41.self_attn.linear_out.bias
,,,"onnx::MatMul_9927 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.41.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1621,/encoder/encoders.41/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.41/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1622,/encoder/encoders.41/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.41/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.41/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.41/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1623,/encoder/encoders.41/Add,Eltwise_Binary,"/encoder/encoders.40/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.41/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.41/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1624,/encoder/encoders.41/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.41/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.41/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9928 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9929 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1625,/encoder/encoders.41/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.41/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.41/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1626,/encoder/encoders.41/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.41/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.41/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.41.feed_forward.w_1.bias
,,,"onnx::MatMul_9930 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.41.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1627,/encoder/encoders.41/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.41/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.41/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1628,/encoder/encoders.41/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.41/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.41/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1629,/encoder/encoders.41/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.41/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.41/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1630,/encoder/encoders.41/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.41/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.41/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.41.feed_forward.w_2.bias
,,,"onnx::MatMul_9931 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.41.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1631,/encoder/encoders.41/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.41/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.41/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1632,/encoder/encoders.41/Add_1,Eltwise_Binary,"/encoder/encoders.41/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.41/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.41/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1633,/encoder/encoders.42/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.41/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.42/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9932 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9933 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1634,/encoder/encoders.42/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.42/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1635,/encoder/encoders.42/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.42/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.42.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9934 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.42.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1636,/encoder/encoders.42/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.42/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1637,/encoder/encoders.42/self_attn/Split,Split,"/encoder/encoders.42/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.42/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.42/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1638,/encoder/encoders.42/self_attn/Reshape,Reshape,"/encoder/encoders.42/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1639,/encoder/encoders.42/self_attn/Transpose,Transpose,"/encoder/encoders.42/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1640,/encoder/encoders.42/self_attn/Reshape_1,Reshape,"/encoder/encoders.42/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1641,/encoder/encoders.42/self_attn/Reshape_2,Reshape,"/encoder/encoders.42/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1642,/encoder/encoders.42/self_attn/Transpose_1,Transpose,"/encoder/encoders.42/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1643,/encoder/encoders.42/self_attn/Transpose_2,Transpose,"/encoder/encoders.42/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1644,/encoder/encoders.42/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.42/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1645,/encoder/encoders.42/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.42/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1646,/encoder/encoders.42/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.42/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.42.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1647,/encoder/encoders.42/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.42/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1648,/encoder/encoders.42/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.42/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1649,/encoder/encoders.42/self_attn/Transpose_3,Transpose,"/encoder/encoders.42/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1650,/encoder/encoders.42/self_attn/Add,Eltwise_Binary,"/encoder/encoders.42/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.42/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1651,/encoder/encoders.42/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.42/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1652,/encoder/encoders.42/self_attn/Transpose_4,Transpose,"/encoder/encoders.42/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1653,/encoder/encoders.42/self_attn/MatMul,MatMul,"/encoder/encoders.42/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.42/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1654,/encoder/encoders.42/self_attn/Softmax,Softmax,"/encoder/encoders.42/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1655,/encoder/encoders.42/self_attn/MatMul_1,MatMul,"/encoder/encoders.42/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.42/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1656,/encoder/encoders.42/self_attn/Transpose_5,Transpose,"/encoder/encoders.42/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1657,/encoder/encoders.42/self_attn/Reshape_3,Reshape,"/encoder/encoders.42/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1658,/encoder/encoders.42/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.42/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.42.self_attn.linear_out.bias
,,,"onnx::MatMul_9948 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.42.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1659,/encoder/encoders.42/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.42/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1660,/encoder/encoders.42/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.42/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.42/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.42/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1661,/encoder/encoders.42/Add,Eltwise_Binary,"/encoder/encoders.41/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.42/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.42/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1662,/encoder/encoders.42/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.42/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.42/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9949 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9950 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1663,/encoder/encoders.42/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.42/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.42/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1664,/encoder/encoders.42/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.42/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.42/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.42.feed_forward.w_1.bias
,,,"onnx::MatMul_9951 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.42.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1665,/encoder/encoders.42/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.42/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.42/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1666,/encoder/encoders.42/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.42/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.42/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1667,/encoder/encoders.42/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.42/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.42/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1668,/encoder/encoders.42/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.42/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.42/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.42.feed_forward.w_2.bias
,,,"onnx::MatMul_9952 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.42.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1669,/encoder/encoders.42/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.42/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.42/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1670,/encoder/encoders.42/Add_1,Eltwise_Binary,"/encoder/encoders.42/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.42/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.42/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1671,/encoder/encoders.43/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.42/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.43/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9953 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9954 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1672,/encoder/encoders.43/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.43/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1673,/encoder/encoders.43/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.43/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.43.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9955 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.43.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1674,/encoder/encoders.43/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.43/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1675,/encoder/encoders.43/self_attn/Split,Split,"/encoder/encoders.43/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.43/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.43/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1676,/encoder/encoders.43/self_attn/Reshape,Reshape,"/encoder/encoders.43/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1677,/encoder/encoders.43/self_attn/Transpose,Transpose,"/encoder/encoders.43/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1678,/encoder/encoders.43/self_attn/Reshape_1,Reshape,"/encoder/encoders.43/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1679,/encoder/encoders.43/self_attn/Reshape_2,Reshape,"/encoder/encoders.43/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1680,/encoder/encoders.43/self_attn/Transpose_1,Transpose,"/encoder/encoders.43/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1681,/encoder/encoders.43/self_attn/Transpose_2,Transpose,"/encoder/encoders.43/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1682,/encoder/encoders.43/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.43/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1683,/encoder/encoders.43/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.43/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1684,/encoder/encoders.43/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.43/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.43.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1685,/encoder/encoders.43/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.43/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1686,/encoder/encoders.43/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.43/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1687,/encoder/encoders.43/self_attn/Transpose_3,Transpose,"/encoder/encoders.43/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1688,/encoder/encoders.43/self_attn/Add,Eltwise_Binary,"/encoder/encoders.43/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.43/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1689,/encoder/encoders.43/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.43/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1690,/encoder/encoders.43/self_attn/Transpose_4,Transpose,"/encoder/encoders.43/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1691,/encoder/encoders.43/self_attn/MatMul,MatMul,"/encoder/encoders.43/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.43/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1692,/encoder/encoders.43/self_attn/Softmax,Softmax,"/encoder/encoders.43/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1693,/encoder/encoders.43/self_attn/MatMul_1,MatMul,"/encoder/encoders.43/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.43/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1694,/encoder/encoders.43/self_attn/Transpose_5,Transpose,"/encoder/encoders.43/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1695,/encoder/encoders.43/self_attn/Reshape_3,Reshape,"/encoder/encoders.43/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1696,/encoder/encoders.43/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.43/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.43.self_attn.linear_out.bias
,,,"onnx::MatMul_9969 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.43.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1697,/encoder/encoders.43/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.43/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1698,/encoder/encoders.43/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.43/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.43/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.43/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1699,/encoder/encoders.43/Add,Eltwise_Binary,"/encoder/encoders.42/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.43/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.43/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1700,/encoder/encoders.43/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.43/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.43/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9970 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9971 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1701,/encoder/encoders.43/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.43/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.43/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1702,/encoder/encoders.43/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.43/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.43/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.43.feed_forward.w_1.bias
,,,"onnx::MatMul_9972 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.43.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1703,/encoder/encoders.43/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.43/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.43/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1704,/encoder/encoders.43/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.43/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.43/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1705,/encoder/encoders.43/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.43/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.43/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1706,/encoder/encoders.43/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.43/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.43/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.43.feed_forward.w_2.bias
,,,"onnx::MatMul_9973 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.43.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1707,/encoder/encoders.43/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.43/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.43/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1708,/encoder/encoders.43/Add_1,Eltwise_Binary,"/encoder/encoders.43/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.43/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.43/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1709,/encoder/encoders.44/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.43/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.44/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9974 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9975 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1710,/encoder/encoders.44/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.44/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1711,/encoder/encoders.44/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.44/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.44.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9976 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.44.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1712,/encoder/encoders.44/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.44/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1713,/encoder/encoders.44/self_attn/Split,Split,"/encoder/encoders.44/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.44/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.44/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1714,/encoder/encoders.44/self_attn/Reshape,Reshape,"/encoder/encoders.44/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1715,/encoder/encoders.44/self_attn/Transpose,Transpose,"/encoder/encoders.44/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1716,/encoder/encoders.44/self_attn/Reshape_1,Reshape,"/encoder/encoders.44/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1717,/encoder/encoders.44/self_attn/Reshape_2,Reshape,"/encoder/encoders.44/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1718,/encoder/encoders.44/self_attn/Transpose_1,Transpose,"/encoder/encoders.44/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1719,/encoder/encoders.44/self_attn/Transpose_2,Transpose,"/encoder/encoders.44/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1720,/encoder/encoders.44/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.44/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1721,/encoder/encoders.44/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.44/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1722,/encoder/encoders.44/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.44/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.44.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1723,/encoder/encoders.44/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.44/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1724,/encoder/encoders.44/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.44/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1725,/encoder/encoders.44/self_attn/Transpose_3,Transpose,"/encoder/encoders.44/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1726,/encoder/encoders.44/self_attn/Add,Eltwise_Binary,"/encoder/encoders.44/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.44/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1727,/encoder/encoders.44/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.44/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1728,/encoder/encoders.44/self_attn/Transpose_4,Transpose,"/encoder/encoders.44/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1729,/encoder/encoders.44/self_attn/MatMul,MatMul,"/encoder/encoders.44/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.44/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1730,/encoder/encoders.44/self_attn/Softmax,Softmax,"/encoder/encoders.44/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1731,/encoder/encoders.44/self_attn/MatMul_1,MatMul,"/encoder/encoders.44/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.44/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1732,/encoder/encoders.44/self_attn/Transpose_5,Transpose,"/encoder/encoders.44/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1733,/encoder/encoders.44/self_attn/Reshape_3,Reshape,"/encoder/encoders.44/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1734,/encoder/encoders.44/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.44/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.44.self_attn.linear_out.bias
,,,"onnx::MatMul_9990 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.44.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1735,/encoder/encoders.44/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.44/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1736,/encoder/encoders.44/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.44/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.44/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.44/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1737,/encoder/encoders.44/Add,Eltwise_Binary,"/encoder/encoders.43/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.44/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.44/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1738,/encoder/encoders.44/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.44/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.44/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9991 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9992 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1739,/encoder/encoders.44/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.44/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.44/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1740,/encoder/encoders.44/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.44/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.44/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.44.feed_forward.w_1.bias
,,,"onnx::MatMul_9993 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.44.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1741,/encoder/encoders.44/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.44/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.44/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1742,/encoder/encoders.44/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.44/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.44/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1743,/encoder/encoders.44/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.44/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.44/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1744,/encoder/encoders.44/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.44/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.44/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.44.feed_forward.w_2.bias
,,,"onnx::MatMul_9994 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.44.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1745,/encoder/encoders.44/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.44/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.44/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1746,/encoder/encoders.44/Add_1,Eltwise_Binary,"/encoder/encoders.44/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.44/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.44/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1747,/encoder/encoders.45/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.44/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.45/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_9995 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_9996 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1748,/encoder/encoders.45/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.45/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1749,/encoder/encoders.45/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.45/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.45.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_9997 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.45.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1750,/encoder/encoders.45/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.45/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1751,/encoder/encoders.45/self_attn/Split,Split,"/encoder/encoders.45/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.45/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.45/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1752,/encoder/encoders.45/self_attn/Reshape,Reshape,"/encoder/encoders.45/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1753,/encoder/encoders.45/self_attn/Transpose,Transpose,"/encoder/encoders.45/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1754,/encoder/encoders.45/self_attn/Reshape_1,Reshape,"/encoder/encoders.45/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1755,/encoder/encoders.45/self_attn/Reshape_2,Reshape,"/encoder/encoders.45/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1756,/encoder/encoders.45/self_attn/Transpose_1,Transpose,"/encoder/encoders.45/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1757,/encoder/encoders.45/self_attn/Transpose_2,Transpose,"/encoder/encoders.45/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1758,/encoder/encoders.45/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.45/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1759,/encoder/encoders.45/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.45/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1760,/encoder/encoders.45/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.45/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.45.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1761,/encoder/encoders.45/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.45/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1762,/encoder/encoders.45/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.45/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1763,/encoder/encoders.45/self_attn/Transpose_3,Transpose,"/encoder/encoders.45/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1764,/encoder/encoders.45/self_attn/Add,Eltwise_Binary,"/encoder/encoders.45/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.45/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1765,/encoder/encoders.45/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.45/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1766,/encoder/encoders.45/self_attn/Transpose_4,Transpose,"/encoder/encoders.45/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1767,/encoder/encoders.45/self_attn/MatMul,MatMul,"/encoder/encoders.45/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.45/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1768,/encoder/encoders.45/self_attn/Softmax,Softmax,"/encoder/encoders.45/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1769,/encoder/encoders.45/self_attn/MatMul_1,MatMul,"/encoder/encoders.45/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.45/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1770,/encoder/encoders.45/self_attn/Transpose_5,Transpose,"/encoder/encoders.45/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1771,/encoder/encoders.45/self_attn/Reshape_3,Reshape,"/encoder/encoders.45/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1772,/encoder/encoders.45/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.45/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.45.self_attn.linear_out.bias
,,,"onnx::MatMul_10011 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.45.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1773,/encoder/encoders.45/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.45/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1774,/encoder/encoders.45/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.45/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.45/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.45/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1775,/encoder/encoders.45/Add,Eltwise_Binary,"/encoder/encoders.44/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.45/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.45/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1776,/encoder/encoders.45/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.45/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.45/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10012 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10013 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1777,/encoder/encoders.45/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.45/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.45/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1778,/encoder/encoders.45/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.45/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.45/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.45.feed_forward.w_1.bias
,,,"onnx::MatMul_10014 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.45.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1779,/encoder/encoders.45/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.45/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.45/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1780,/encoder/encoders.45/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.45/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.45/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1781,/encoder/encoders.45/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.45/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.45/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1782,/encoder/encoders.45/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.45/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.45/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.45.feed_forward.w_2.bias
,,,"onnx::MatMul_10015 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.45.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1783,/encoder/encoders.45/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.45/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.45/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1784,/encoder/encoders.45/Add_1,Eltwise_Binary,"/encoder/encoders.45/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.45/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.45/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1785,/encoder/encoders.46/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.45/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.46/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10016 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10017 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1786,/encoder/encoders.46/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.46/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1787,/encoder/encoders.46/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.46/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.46.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10018 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.46.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1788,/encoder/encoders.46/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.46/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1789,/encoder/encoders.46/self_attn/Split,Split,"/encoder/encoders.46/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.46/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.46/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1790,/encoder/encoders.46/self_attn/Reshape,Reshape,"/encoder/encoders.46/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1791,/encoder/encoders.46/self_attn/Transpose,Transpose,"/encoder/encoders.46/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1792,/encoder/encoders.46/self_attn/Reshape_1,Reshape,"/encoder/encoders.46/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1793,/encoder/encoders.46/self_attn/Reshape_2,Reshape,"/encoder/encoders.46/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1794,/encoder/encoders.46/self_attn/Transpose_1,Transpose,"/encoder/encoders.46/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1795,/encoder/encoders.46/self_attn/Transpose_2,Transpose,"/encoder/encoders.46/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1796,/encoder/encoders.46/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.46/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1797,/encoder/encoders.46/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.46/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1798,/encoder/encoders.46/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.46/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.46.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1799,/encoder/encoders.46/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.46/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1800,/encoder/encoders.46/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.46/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1801,/encoder/encoders.46/self_attn/Transpose_3,Transpose,"/encoder/encoders.46/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1802,/encoder/encoders.46/self_attn/Add,Eltwise_Binary,"/encoder/encoders.46/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.46/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1803,/encoder/encoders.46/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.46/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1804,/encoder/encoders.46/self_attn/Transpose_4,Transpose,"/encoder/encoders.46/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1805,/encoder/encoders.46/self_attn/MatMul,MatMul,"/encoder/encoders.46/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.46/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1806,/encoder/encoders.46/self_attn/Softmax,Softmax,"/encoder/encoders.46/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1807,/encoder/encoders.46/self_attn/MatMul_1,MatMul,"/encoder/encoders.46/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.46/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1808,/encoder/encoders.46/self_attn/Transpose_5,Transpose,"/encoder/encoders.46/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1809,/encoder/encoders.46/self_attn/Reshape_3,Reshape,"/encoder/encoders.46/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1810,/encoder/encoders.46/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.46/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.46.self_attn.linear_out.bias
,,,"onnx::MatMul_10032 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.46.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1811,/encoder/encoders.46/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.46/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1812,/encoder/encoders.46/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.46/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.46/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.46/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1813,/encoder/encoders.46/Add,Eltwise_Binary,"/encoder/encoders.45/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.46/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.46/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1814,/encoder/encoders.46/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.46/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.46/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10033 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10034 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1815,/encoder/encoders.46/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.46/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.46/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1816,/encoder/encoders.46/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.46/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.46/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.46.feed_forward.w_1.bias
,,,"onnx::MatMul_10035 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.46.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1817,/encoder/encoders.46/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.46/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.46/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1818,/encoder/encoders.46/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.46/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.46/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1819,/encoder/encoders.46/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.46/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.46/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1820,/encoder/encoders.46/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.46/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.46/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.46.feed_forward.w_2.bias
,,,"onnx::MatMul_10036 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.46.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1821,/encoder/encoders.46/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.46/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.46/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1822,/encoder/encoders.46/Add_1,Eltwise_Binary,"/encoder/encoders.46/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.46/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.46/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1823,/encoder/encoders.47/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.46/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.47/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10037 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10038 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1824,/encoder/encoders.47/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.47/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1825,/encoder/encoders.47/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.47/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.47.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10039 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.47.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1826,/encoder/encoders.47/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.47/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1827,/encoder/encoders.47/self_attn/Split,Split,"/encoder/encoders.47/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.47/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.47/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1828,/encoder/encoders.47/self_attn/Reshape,Reshape,"/encoder/encoders.47/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1829,/encoder/encoders.47/self_attn/Transpose,Transpose,"/encoder/encoders.47/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1830,/encoder/encoders.47/self_attn/Reshape_1,Reshape,"/encoder/encoders.47/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1831,/encoder/encoders.47/self_attn/Reshape_2,Reshape,"/encoder/encoders.47/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1832,/encoder/encoders.47/self_attn/Transpose_1,Transpose,"/encoder/encoders.47/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1833,/encoder/encoders.47/self_attn/Transpose_2,Transpose,"/encoder/encoders.47/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1834,/encoder/encoders.47/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.47/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1835,/encoder/encoders.47/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.47/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1836,/encoder/encoders.47/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.47/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.47.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1837,/encoder/encoders.47/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.47/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1838,/encoder/encoders.47/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.47/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1839,/encoder/encoders.47/self_attn/Transpose_3,Transpose,"/encoder/encoders.47/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1840,/encoder/encoders.47/self_attn/Add,Eltwise_Binary,"/encoder/encoders.47/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.47/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1841,/encoder/encoders.47/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.47/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1842,/encoder/encoders.47/self_attn/Transpose_4,Transpose,"/encoder/encoders.47/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1843,/encoder/encoders.47/self_attn/MatMul,MatMul,"/encoder/encoders.47/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.47/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1844,/encoder/encoders.47/self_attn/Softmax,Softmax,"/encoder/encoders.47/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1845,/encoder/encoders.47/self_attn/MatMul_1,MatMul,"/encoder/encoders.47/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.47/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1846,/encoder/encoders.47/self_attn/Transpose_5,Transpose,"/encoder/encoders.47/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1847,/encoder/encoders.47/self_attn/Reshape_3,Reshape,"/encoder/encoders.47/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1848,/encoder/encoders.47/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.47/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.47.self_attn.linear_out.bias
,,,"onnx::MatMul_10053 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.47.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1849,/encoder/encoders.47/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.47/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1850,/encoder/encoders.47/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.47/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.47/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.47/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1851,/encoder/encoders.47/Add,Eltwise_Binary,"/encoder/encoders.46/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.47/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.47/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1852,/encoder/encoders.47/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.47/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.47/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10054 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10055 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1853,/encoder/encoders.47/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.47/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.47/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1854,/encoder/encoders.47/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.47/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.47/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.47.feed_forward.w_1.bias
,,,"onnx::MatMul_10056 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.47.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1855,/encoder/encoders.47/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.47/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.47/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1856,/encoder/encoders.47/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.47/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.47/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1857,/encoder/encoders.47/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.47/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.47/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1858,/encoder/encoders.47/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.47/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.47/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.47.feed_forward.w_2.bias
,,,"onnx::MatMul_10057 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.47.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1859,/encoder/encoders.47/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.47/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.47/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1860,/encoder/encoders.47/Add_1,Eltwise_Binary,"/encoder/encoders.47/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.47/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.47/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1861,/encoder/encoders.48/norm1/LayerNormalization,LayerNorm,"/encoder/encoders.47/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.48/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10058 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10059 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1862,/encoder/encoders.48/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/encoders.48/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1863,/encoder/encoders.48/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/encoders.48/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.encoders.48.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10060 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.48.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1864,/encoder/encoders.48/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/encoders.48/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1865,/encoder/encoders.48/self_attn/Split,Split,"/encoder/encoders.48/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/encoders.48/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/encoders.48/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1866,/encoder/encoders.48/self_attn/Reshape,Reshape,"/encoder/encoders.48/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1867,/encoder/encoders.48/self_attn/Transpose,Transpose,"/encoder/encoders.48/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1868,/encoder/encoders.48/self_attn/Reshape_1,Reshape,"/encoder/encoders.48/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1869,/encoder/encoders.48/self_attn/Reshape_2,Reshape,"/encoder/encoders.48/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1870,/encoder/encoders.48/self_attn/Transpose_1,Transpose,"/encoder/encoders.48/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1871,/encoder/encoders.48/self_attn/Transpose_2,Transpose,"/encoder/encoders.48/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1872,/encoder/encoders.48/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/encoders.48/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1873,/encoder/encoders.48/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/encoders.48/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1874,/encoder/encoders.48/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/encoders.48/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.encoders.48.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1875,/encoder/encoders.48/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/encoders.48/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1876,/encoder/encoders.48/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/encoders.48/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1877,/encoder/encoders.48/self_attn/Transpose_3,Transpose,"/encoder/encoders.48/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1878,/encoder/encoders.48/self_attn/Add,Eltwise_Binary,"/encoder/encoders.48/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.48/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1879,/encoder/encoders.48/self_attn/Mul,Eltwise_Binary,"/encoder/encoders.48/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1880,/encoder/encoders.48/self_attn/Transpose_4,Transpose,"/encoder/encoders.48/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1881,/encoder/encoders.48/self_attn/MatMul,MatMul,"/encoder/encoders.48/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.48/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1882,/encoder/encoders.48/self_attn/Softmax,Softmax,"/encoder/encoders.48/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1883,/encoder/encoders.48/self_attn/MatMul_1,MatMul,"/encoder/encoders.48/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/encoders.48/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1884,/encoder/encoders.48/self_attn/Transpose_5,Transpose,"/encoder/encoders.48/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1885,/encoder/encoders.48/self_attn/Reshape_3,Reshape,"/encoder/encoders.48/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1886,/encoder/encoders.48/self_attn/linear_out/MatMul,FullyConnected,"/encoder/encoders.48/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.48.self_attn.linear_out.bias
,,,"onnx::MatMul_10074 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.48.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1887,/encoder/encoders.48/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/encoders.48/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1888,/encoder/encoders.48/self_attn/Add_1,Eltwise_Binary,"/encoder/encoders.48/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.48/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.48/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1889,/encoder/encoders.48/Add,Eltwise_Binary,"/encoder/encoders.47/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.48/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.48/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1890,/encoder/encoders.48/norm2/LayerNormalization,LayerNorm,"/encoder/encoders.48/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.48/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10075 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10076 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1891,/encoder/encoders.48/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/encoders.48/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.48/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1892,/encoder/encoders.48/feed_forward/w_1/MatMul,FullyConnected,"/encoder/encoders.48/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.48/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.encoders.48.feed_forward.w_1.bias
,,,"onnx::MatMul_10077 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.48.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1893,/encoder/encoders.48/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/encoders.48/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.48/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1894,/encoder/encoders.48/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/encoders.48/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.48/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1895,/encoder/encoders.48/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/encoders.48/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/encoders.48/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1896,/encoder/encoders.48/feed_forward/w_2/MatMul,FullyConnected,"/encoder/encoders.48/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/encoders.48/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.encoders.48.feed_forward.w_2.bias
,,,"onnx::MatMul_10078 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.encoders.48.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1897,/encoder/encoders.48/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/encoders.48/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/encoders.48/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1898,/encoder/encoders.48/Add_1,Eltwise_Binary,"/encoder/encoders.48/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/encoders.48/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/encoders.48/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1899,/encoder/after_norm/LayerNormalization,LayerNorm,"/encoder/encoders.48/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/after_norm/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10079 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10080 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1900,/encoder/tp_encoders.0/norm1/LayerNormalization,LayerNorm,"/encoder/after_norm/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.0/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10081 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10082 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1901,/encoder/tp_encoders.0/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.0/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1902,/encoder/tp_encoders.0/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/tp_encoders.0/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.tp_encoders.0.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10083 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.0.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1903,/encoder/tp_encoders.0/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.0/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1904,/encoder/tp_encoders.0/self_attn/Split,Split,"/encoder/tp_encoders.0/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/tp_encoders.0/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/tp_encoders.0/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1905,/encoder/tp_encoders.0/self_attn/Reshape,Reshape,"/encoder/tp_encoders.0/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1906,/encoder/tp_encoders.0/self_attn/Transpose,Transpose,"/encoder/tp_encoders.0/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1907,/encoder/tp_encoders.0/self_attn/Reshape_1,Reshape,"/encoder/tp_encoders.0/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1908,/encoder/tp_encoders.0/self_attn/Reshape_2,Reshape,"/encoder/tp_encoders.0/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1909,/encoder/tp_encoders.0/self_attn/Transpose_1,Transpose,"/encoder/tp_encoders.0/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1910,/encoder/tp_encoders.0/self_attn/Transpose_2,Transpose,"/encoder/tp_encoders.0/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1911,/encoder/tp_encoders.0/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/tp_encoders.0/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1912,/encoder/tp_encoders.0/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/tp_encoders.0/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1913,/encoder/tp_encoders.0/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/tp_encoders.0/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.tp_encoders.0.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1914,/encoder/tp_encoders.0/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/tp_encoders.0/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1915,/encoder/tp_encoders.0/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/tp_encoders.0/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1916,/encoder/tp_encoders.0/self_attn/Transpose_3,Transpose,"/encoder/tp_encoders.0/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1917,/encoder/tp_encoders.0/self_attn/Add,Eltwise_Binary,"/encoder/tp_encoders.0/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.0/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1918,/encoder/tp_encoders.0/self_attn/Mul,Eltwise_Binary,"/encoder/tp_encoders.0/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1919,/encoder/tp_encoders.0/self_attn/Transpose_4,Transpose,"/encoder/tp_encoders.0/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1920,/encoder/tp_encoders.0/self_attn/MatMul,MatMul,"/encoder/tp_encoders.0/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.0/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1921,/encoder/tp_encoders.0/self_attn/Softmax,Softmax,"/encoder/tp_encoders.0/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1922,/encoder/tp_encoders.0/self_attn/MatMul_1,MatMul,"/encoder/tp_encoders.0/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.0/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1923,/encoder/tp_encoders.0/self_attn/Transpose_5,Transpose,"/encoder/tp_encoders.0/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1924,/encoder/tp_encoders.0/self_attn/Reshape_3,Reshape,"/encoder/tp_encoders.0/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1925,/encoder/tp_encoders.0/self_attn/linear_out/MatMul,FullyConnected,"/encoder/tp_encoders.0/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.0.self_attn.linear_out.bias
,,,"onnx::MatMul_10097 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.0.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1926,/encoder/tp_encoders.0/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.0/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1927,/encoder/tp_encoders.0/self_attn/Add_1,Eltwise_Binary,"/encoder/tp_encoders.0/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.0/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.0/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1928,/encoder/tp_encoders.0/Add,Eltwise_Binary,"/encoder/after_norm/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.0/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.0/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1929,/encoder/tp_encoders.0/norm2/LayerNormalization,LayerNorm,"/encoder/tp_encoders.0/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.0/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10098 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10099 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1930,/encoder/tp_encoders.0/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.0/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.0/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1931,/encoder/tp_encoders.0/feed_forward/w_1/MatMul,FullyConnected,"/encoder/tp_encoders.0/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.0/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.tp_encoders.0.feed_forward.w_1.bias
,,,"onnx::MatMul_10100 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.0.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1932,/encoder/tp_encoders.0/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.0/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.0/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1933,/encoder/tp_encoders.0/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/tp_encoders.0/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.0/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1934,/encoder/tp_encoders.0/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.0/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.0/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1935,/encoder/tp_encoders.0/feed_forward/w_2/MatMul,FullyConnected,"/encoder/tp_encoders.0/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.0/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.0.feed_forward.w_2.bias
,,,"onnx::MatMul_10101 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.0.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1936,/encoder/tp_encoders.0/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.0/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.0/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1937,/encoder/tp_encoders.0/Add_1,Eltwise_Binary,"/encoder/tp_encoders.0/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.0/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.0/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1938,/encoder/tp_encoders.1/norm1/LayerNormalization,LayerNorm,"/encoder/tp_encoders.0/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.1/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10102 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10103 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1939,/encoder/tp_encoders.1/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.1/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1940,/encoder/tp_encoders.1/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/tp_encoders.1/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.tp_encoders.1.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10104 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.1.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1941,/encoder/tp_encoders.1/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.1/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1942,/encoder/tp_encoders.1/self_attn/Split,Split,"/encoder/tp_encoders.1/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/tp_encoders.1/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/tp_encoders.1/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1943,/encoder/tp_encoders.1/self_attn/Reshape,Reshape,"/encoder/tp_encoders.1/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1944,/encoder/tp_encoders.1/self_attn/Transpose,Transpose,"/encoder/tp_encoders.1/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1945,/encoder/tp_encoders.1/self_attn/Reshape_1,Reshape,"/encoder/tp_encoders.1/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1946,/encoder/tp_encoders.1/self_attn/Reshape_2,Reshape,"/encoder/tp_encoders.1/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1947,/encoder/tp_encoders.1/self_attn/Transpose_1,Transpose,"/encoder/tp_encoders.1/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1948,/encoder/tp_encoders.1/self_attn/Transpose_2,Transpose,"/encoder/tp_encoders.1/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1949,/encoder/tp_encoders.1/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/tp_encoders.1/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1950,/encoder/tp_encoders.1/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/tp_encoders.1/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1951,/encoder/tp_encoders.1/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/tp_encoders.1/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.tp_encoders.1.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1952,/encoder/tp_encoders.1/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/tp_encoders.1/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1953,/encoder/tp_encoders.1/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/tp_encoders.1/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1954,/encoder/tp_encoders.1/self_attn/Transpose_3,Transpose,"/encoder/tp_encoders.1/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1955,/encoder/tp_encoders.1/self_attn/Add,Eltwise_Binary,"/encoder/tp_encoders.1/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.1/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1956,/encoder/tp_encoders.1/self_attn/Mul,Eltwise_Binary,"/encoder/tp_encoders.1/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1957,/encoder/tp_encoders.1/self_attn/Transpose_4,Transpose,"/encoder/tp_encoders.1/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1958,/encoder/tp_encoders.1/self_attn/MatMul,MatMul,"/encoder/tp_encoders.1/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.1/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1959,/encoder/tp_encoders.1/self_attn/Softmax,Softmax,"/encoder/tp_encoders.1/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1960,/encoder/tp_encoders.1/self_attn/MatMul_1,MatMul,"/encoder/tp_encoders.1/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.1/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1961,/encoder/tp_encoders.1/self_attn/Transpose_5,Transpose,"/encoder/tp_encoders.1/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1962,/encoder/tp_encoders.1/self_attn/Reshape_3,Reshape,"/encoder/tp_encoders.1/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1963,/encoder/tp_encoders.1/self_attn/linear_out/MatMul,FullyConnected,"/encoder/tp_encoders.1/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.1.self_attn.linear_out.bias
,,,"onnx::MatMul_10118 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.1.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
1964,/encoder/tp_encoders.1/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.1/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1965,/encoder/tp_encoders.1/self_attn/Add_1,Eltwise_Binary,"/encoder/tp_encoders.1/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.1/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.1/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1966,/encoder/tp_encoders.1/Add,Eltwise_Binary,"/encoder/tp_encoders.0/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.1/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1967,/encoder/tp_encoders.1/norm2/LayerNormalization,LayerNorm,"/encoder/tp_encoders.1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.1/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10119 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10120 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1968,/encoder/tp_encoders.1/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.1/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.1/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1969,/encoder/tp_encoders.1/feed_forward/w_1/MatMul,FullyConnected,"/encoder/tp_encoders.1/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.1/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.tp_encoders.1.feed_forward.w_1.bias
,,,"onnx::MatMul_10121 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.1.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
1970,/encoder/tp_encoders.1/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.1/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.1/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
1971,/encoder/tp_encoders.1/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/tp_encoders.1/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.1/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
1972,/encoder/tp_encoders.1/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.1/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.1/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
1973,/encoder/tp_encoders.1/feed_forward/w_2/MatMul,FullyConnected,"/encoder/tp_encoders.1/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.1/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.1.feed_forward.w_2.bias
,,,"onnx::MatMul_10122 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.1.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
1974,/encoder/tp_encoders.1/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.1/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.1/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
1975,/encoder/tp_encoders.1/Add_1,Eltwise_Binary,"/encoder/tp_encoders.1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.1/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.1/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1976,/encoder/tp_encoders.2/norm1/LayerNormalization,LayerNorm,"/encoder/tp_encoders.1/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.2/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10123 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10124 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
1977,/encoder/tp_encoders.2/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.2/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
1978,/encoder/tp_encoders.2/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/tp_encoders.2/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.tp_encoders.2.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10125 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.2.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
1979,/encoder/tp_encoders.2/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.2/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
1980,/encoder/tp_encoders.2/self_attn/Split,Split,"/encoder/tp_encoders.2/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/tp_encoders.2/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/tp_encoders.2/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
1981,/encoder/tp_encoders.2/self_attn/Reshape,Reshape,"/encoder/tp_encoders.2/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1982,/encoder/tp_encoders.2/self_attn/Transpose,Transpose,"/encoder/tp_encoders.2/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1983,/encoder/tp_encoders.2/self_attn/Reshape_1,Reshape,"/encoder/tp_encoders.2/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1984,/encoder/tp_encoders.2/self_attn/Reshape_2,Reshape,"/encoder/tp_encoders.2/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
1985,/encoder/tp_encoders.2/self_attn/Transpose_1,Transpose,"/encoder/tp_encoders.2/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
1986,/encoder/tp_encoders.2/self_attn/Transpose_2,Transpose,"/encoder/tp_encoders.2/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1987,/encoder/tp_encoders.2/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/tp_encoders.2/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
1988,/encoder/tp_encoders.2/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/tp_encoders.2/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1989,/encoder/tp_encoders.2/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/tp_encoders.2/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.tp_encoders.2.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
1990,/encoder/tp_encoders.2/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/tp_encoders.2/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
1991,/encoder/tp_encoders.2/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/tp_encoders.2/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
1992,/encoder/tp_encoders.2/self_attn/Transpose_3,Transpose,"/encoder/tp_encoders.2/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
1993,/encoder/tp_encoders.2/self_attn/Add,Eltwise_Binary,"/encoder/tp_encoders.2/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.2/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
1994,/encoder/tp_encoders.2/self_attn/Mul,Eltwise_Binary,"/encoder/tp_encoders.2/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
1995,/encoder/tp_encoders.2/self_attn/Transpose_4,Transpose,"/encoder/tp_encoders.2/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
1996,/encoder/tp_encoders.2/self_attn/MatMul,MatMul,"/encoder/tp_encoders.2/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.2/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
1997,/encoder/tp_encoders.2/self_attn/Softmax,Softmax,"/encoder/tp_encoders.2/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
1998,/encoder/tp_encoders.2/self_attn/MatMul_1,MatMul,"/encoder/tp_encoders.2/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.2/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
1999,/encoder/tp_encoders.2/self_attn/Transpose_5,Transpose,"/encoder/tp_encoders.2/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2000,/encoder/tp_encoders.2/self_attn/Reshape_3,Reshape,"/encoder/tp_encoders.2/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2001,/encoder/tp_encoders.2/self_attn/linear_out/MatMul,FullyConnected,"/encoder/tp_encoders.2/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.2.self_attn.linear_out.bias
,,,"onnx::MatMul_10139 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.2.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
2002,/encoder/tp_encoders.2/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.2/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2003,/encoder/tp_encoders.2/self_attn/Add_1,Eltwise_Binary,"/encoder/tp_encoders.2/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.2/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.2/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2004,/encoder/tp_encoders.2/Add,Eltwise_Binary,"/encoder/tp_encoders.1/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.2/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2005,/encoder/tp_encoders.2/norm2/LayerNormalization,LayerNorm,"/encoder/tp_encoders.2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.2/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10140 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10141 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2006,/encoder/tp_encoders.2/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.2/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.2/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2007,/encoder/tp_encoders.2/feed_forward/w_1/MatMul,FullyConnected,"/encoder/tp_encoders.2/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.2/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.tp_encoders.2.feed_forward.w_1.bias
,,,"onnx::MatMul_10142 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.2.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
2008,/encoder/tp_encoders.2/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.2/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.2/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
2009,/encoder/tp_encoders.2/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/tp_encoders.2/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.2/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
2010,/encoder/tp_encoders.2/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.2/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.2/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
2011,/encoder/tp_encoders.2/feed_forward/w_2/MatMul,FullyConnected,"/encoder/tp_encoders.2/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.2/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.2.feed_forward.w_2.bias
,,,"onnx::MatMul_10143 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.2.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
2012,/encoder/tp_encoders.2/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.2/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.2/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2013,/encoder/tp_encoders.2/Add_1,Eltwise_Binary,"/encoder/tp_encoders.2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.2/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.2/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2014,/encoder/tp_encoders.3/norm1/LayerNormalization,LayerNorm,"/encoder/tp_encoders.2/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.3/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10144 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10145 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2015,/encoder/tp_encoders.3/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.3/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2016,/encoder/tp_encoders.3/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/tp_encoders.3/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.tp_encoders.3.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10146 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.3.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
2017,/encoder/tp_encoders.3/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.3/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
2018,/encoder/tp_encoders.3/self_attn/Split,Split,"/encoder/tp_encoders.3/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/tp_encoders.3/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/tp_encoders.3/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
2019,/encoder/tp_encoders.3/self_attn/Reshape,Reshape,"/encoder/tp_encoders.3/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2020,/encoder/tp_encoders.3/self_attn/Transpose,Transpose,"/encoder/tp_encoders.3/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2021,/encoder/tp_encoders.3/self_attn/Reshape_1,Reshape,"/encoder/tp_encoders.3/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2022,/encoder/tp_encoders.3/self_attn/Reshape_2,Reshape,"/encoder/tp_encoders.3/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2023,/encoder/tp_encoders.3/self_attn/Transpose_1,Transpose,"/encoder/tp_encoders.3/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2024,/encoder/tp_encoders.3/self_attn/Transpose_2,Transpose,"/encoder/tp_encoders.3/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2025,/encoder/tp_encoders.3/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/tp_encoders.3/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
2026,/encoder/tp_encoders.3/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/tp_encoders.3/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2027,/encoder/tp_encoders.3/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/tp_encoders.3/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.tp_encoders.3.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
2028,/encoder/tp_encoders.3/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/tp_encoders.3/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
2029,/encoder/tp_encoders.3/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/tp_encoders.3/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
2030,/encoder/tp_encoders.3/self_attn/Transpose_3,Transpose,"/encoder/tp_encoders.3/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2031,/encoder/tp_encoders.3/self_attn/Add,Eltwise_Binary,"/encoder/tp_encoders.3/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.3/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2032,/encoder/tp_encoders.3/self_attn/Mul,Eltwise_Binary,"/encoder/tp_encoders.3/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
2033,/encoder/tp_encoders.3/self_attn/Transpose_4,Transpose,"/encoder/tp_encoders.3/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2034,/encoder/tp_encoders.3/self_attn/MatMul,MatMul,"/encoder/tp_encoders.3/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.3/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
2035,/encoder/tp_encoders.3/self_attn/Softmax,Softmax,"/encoder/tp_encoders.3/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
2036,/encoder/tp_encoders.3/self_attn/MatMul_1,MatMul,"/encoder/tp_encoders.3/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.3/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
2037,/encoder/tp_encoders.3/self_attn/Transpose_5,Transpose,"/encoder/tp_encoders.3/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2038,/encoder/tp_encoders.3/self_attn/Reshape_3,Reshape,"/encoder/tp_encoders.3/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2039,/encoder/tp_encoders.3/self_attn/linear_out/MatMul,FullyConnected,"/encoder/tp_encoders.3/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.3.self_attn.linear_out.bias
,,,"onnx::MatMul_10160 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.3.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
2040,/encoder/tp_encoders.3/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.3/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2041,/encoder/tp_encoders.3/self_attn/Add_1,Eltwise_Binary,"/encoder/tp_encoders.3/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.3/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.3/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2042,/encoder/tp_encoders.3/Add,Eltwise_Binary,"/encoder/tp_encoders.2/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.3/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.3/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2043,/encoder/tp_encoders.3/norm2/LayerNormalization,LayerNorm,"/encoder/tp_encoders.3/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.3/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10161 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10162 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2044,/encoder/tp_encoders.3/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.3/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.3/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2045,/encoder/tp_encoders.3/feed_forward/w_1/MatMul,FullyConnected,"/encoder/tp_encoders.3/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.3/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.tp_encoders.3.feed_forward.w_1.bias
,,,"onnx::MatMul_10163 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.3.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
2046,/encoder/tp_encoders.3/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.3/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.3/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
2047,/encoder/tp_encoders.3/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/tp_encoders.3/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.3/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
2048,/encoder/tp_encoders.3/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.3/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.3/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
2049,/encoder/tp_encoders.3/feed_forward/w_2/MatMul,FullyConnected,"/encoder/tp_encoders.3/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.3/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.3.feed_forward.w_2.bias
,,,"onnx::MatMul_10164 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.3.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
2050,/encoder/tp_encoders.3/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.3/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.3/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2051,/encoder/tp_encoders.3/Add_1,Eltwise_Binary,"/encoder/tp_encoders.3/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.3/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.3/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2052,/encoder/tp_encoders.4/norm1/LayerNormalization,LayerNorm,"/encoder/tp_encoders.3/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.4/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10165 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10166 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2053,/encoder/tp_encoders.4/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.4/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2054,/encoder/tp_encoders.4/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/tp_encoders.4/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.tp_encoders.4.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10167 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.4.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
2055,/encoder/tp_encoders.4/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.4/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
2056,/encoder/tp_encoders.4/self_attn/Split,Split,"/encoder/tp_encoders.4/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/tp_encoders.4/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/tp_encoders.4/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
2057,/encoder/tp_encoders.4/self_attn/Reshape,Reshape,"/encoder/tp_encoders.4/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2058,/encoder/tp_encoders.4/self_attn/Transpose,Transpose,"/encoder/tp_encoders.4/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2059,/encoder/tp_encoders.4/self_attn/Reshape_1,Reshape,"/encoder/tp_encoders.4/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2060,/encoder/tp_encoders.4/self_attn/Reshape_2,Reshape,"/encoder/tp_encoders.4/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2061,/encoder/tp_encoders.4/self_attn/Transpose_1,Transpose,"/encoder/tp_encoders.4/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2062,/encoder/tp_encoders.4/self_attn/Transpose_2,Transpose,"/encoder/tp_encoders.4/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2063,/encoder/tp_encoders.4/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/tp_encoders.4/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
2064,/encoder/tp_encoders.4/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/tp_encoders.4/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2065,/encoder/tp_encoders.4/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/tp_encoders.4/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.tp_encoders.4.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
2066,/encoder/tp_encoders.4/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/tp_encoders.4/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
2067,/encoder/tp_encoders.4/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/tp_encoders.4/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
2068,/encoder/tp_encoders.4/self_attn/Transpose_3,Transpose,"/encoder/tp_encoders.4/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2069,/encoder/tp_encoders.4/self_attn/Add,Eltwise_Binary,"/encoder/tp_encoders.4/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.4/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2070,/encoder/tp_encoders.4/self_attn/Mul,Eltwise_Binary,"/encoder/tp_encoders.4/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
2071,/encoder/tp_encoders.4/self_attn/Transpose_4,Transpose,"/encoder/tp_encoders.4/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2072,/encoder/tp_encoders.4/self_attn/MatMul,MatMul,"/encoder/tp_encoders.4/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.4/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
2073,/encoder/tp_encoders.4/self_attn/Softmax,Softmax,"/encoder/tp_encoders.4/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
2074,/encoder/tp_encoders.4/self_attn/MatMul_1,MatMul,"/encoder/tp_encoders.4/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.4/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
2075,/encoder/tp_encoders.4/self_attn/Transpose_5,Transpose,"/encoder/tp_encoders.4/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2076,/encoder/tp_encoders.4/self_attn/Reshape_3,Reshape,"/encoder/tp_encoders.4/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2077,/encoder/tp_encoders.4/self_attn/linear_out/MatMul,FullyConnected,"/encoder/tp_encoders.4/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.4.self_attn.linear_out.bias
,,,"onnx::MatMul_10181 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.4.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
2078,/encoder/tp_encoders.4/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.4/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2079,/encoder/tp_encoders.4/self_attn/Add_1,Eltwise_Binary,"/encoder/tp_encoders.4/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.4/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.4/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2080,/encoder/tp_encoders.4/Add,Eltwise_Binary,"/encoder/tp_encoders.3/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.4/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.4/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2081,/encoder/tp_encoders.4/norm2/LayerNormalization,LayerNorm,"/encoder/tp_encoders.4/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.4/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10182 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10183 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2082,/encoder/tp_encoders.4/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.4/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.4/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2083,/encoder/tp_encoders.4/feed_forward/w_1/MatMul,FullyConnected,"/encoder/tp_encoders.4/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.4/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.tp_encoders.4.feed_forward.w_1.bias
,,,"onnx::MatMul_10184 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.4.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
2084,/encoder/tp_encoders.4/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.4/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.4/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
2085,/encoder/tp_encoders.4/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/tp_encoders.4/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.4/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
2086,/encoder/tp_encoders.4/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.4/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.4/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
2087,/encoder/tp_encoders.4/feed_forward/w_2/MatMul,FullyConnected,"/encoder/tp_encoders.4/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.4/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.4.feed_forward.w_2.bias
,,,"onnx::MatMul_10185 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.4.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
2088,/encoder/tp_encoders.4/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.4/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.4/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2089,/encoder/tp_encoders.4/Add_1,Eltwise_Binary,"/encoder/tp_encoders.4/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.4/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.4/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2090,/encoder/tp_encoders.5/norm1/LayerNormalization,LayerNorm,"/encoder/tp_encoders.4/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.5/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10186 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10187 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2091,/encoder/tp_encoders.5/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.5/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2092,/encoder/tp_encoders.5/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/tp_encoders.5/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.tp_encoders.5.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10188 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.5.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
2093,/encoder/tp_encoders.5/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.5/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
2094,/encoder/tp_encoders.5/self_attn/Split,Split,"/encoder/tp_encoders.5/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/tp_encoders.5/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/tp_encoders.5/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
2095,/encoder/tp_encoders.5/self_attn/Reshape,Reshape,"/encoder/tp_encoders.5/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2096,/encoder/tp_encoders.5/self_attn/Transpose,Transpose,"/encoder/tp_encoders.5/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2097,/encoder/tp_encoders.5/self_attn/Reshape_1,Reshape,"/encoder/tp_encoders.5/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2098,/encoder/tp_encoders.5/self_attn/Reshape_2,Reshape,"/encoder/tp_encoders.5/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2099,/encoder/tp_encoders.5/self_attn/Transpose_1,Transpose,"/encoder/tp_encoders.5/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2100,/encoder/tp_encoders.5/self_attn/Transpose_2,Transpose,"/encoder/tp_encoders.5/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2101,/encoder/tp_encoders.5/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/tp_encoders.5/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
2102,/encoder/tp_encoders.5/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/tp_encoders.5/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2103,/encoder/tp_encoders.5/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/tp_encoders.5/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.tp_encoders.5.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
2104,/encoder/tp_encoders.5/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/tp_encoders.5/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
2105,/encoder/tp_encoders.5/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/tp_encoders.5/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
2106,/encoder/tp_encoders.5/self_attn/Transpose_3,Transpose,"/encoder/tp_encoders.5/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2107,/encoder/tp_encoders.5/self_attn/Add,Eltwise_Binary,"/encoder/tp_encoders.5/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.5/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2108,/encoder/tp_encoders.5/self_attn/Mul,Eltwise_Binary,"/encoder/tp_encoders.5/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
2109,/encoder/tp_encoders.5/self_attn/Transpose_4,Transpose,"/encoder/tp_encoders.5/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2110,/encoder/tp_encoders.5/self_attn/MatMul,MatMul,"/encoder/tp_encoders.5/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.5/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
2111,/encoder/tp_encoders.5/self_attn/Softmax,Softmax,"/encoder/tp_encoders.5/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
2112,/encoder/tp_encoders.5/self_attn/MatMul_1,MatMul,"/encoder/tp_encoders.5/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.5/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
2113,/encoder/tp_encoders.5/self_attn/Transpose_5,Transpose,"/encoder/tp_encoders.5/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2114,/encoder/tp_encoders.5/self_attn/Reshape_3,Reshape,"/encoder/tp_encoders.5/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2115,/encoder/tp_encoders.5/self_attn/linear_out/MatMul,FullyConnected,"/encoder/tp_encoders.5/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.5.self_attn.linear_out.bias
,,,"onnx::MatMul_10202 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.5.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
2116,/encoder/tp_encoders.5/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.5/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2117,/encoder/tp_encoders.5/self_attn/Add_1,Eltwise_Binary,"/encoder/tp_encoders.5/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.5/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.5/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2118,/encoder/tp_encoders.5/Add,Eltwise_Binary,"/encoder/tp_encoders.4/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.5/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.5/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2119,/encoder/tp_encoders.5/norm2/LayerNormalization,LayerNorm,"/encoder/tp_encoders.5/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.5/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10203 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10204 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2120,/encoder/tp_encoders.5/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.5/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.5/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2121,/encoder/tp_encoders.5/feed_forward/w_1/MatMul,FullyConnected,"/encoder/tp_encoders.5/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.5/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.tp_encoders.5.feed_forward.w_1.bias
,,,"onnx::MatMul_10205 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.5.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
2122,/encoder/tp_encoders.5/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.5/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.5/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
2123,/encoder/tp_encoders.5/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/tp_encoders.5/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.5/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
2124,/encoder/tp_encoders.5/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.5/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.5/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
2125,/encoder/tp_encoders.5/feed_forward/w_2/MatMul,FullyConnected,"/encoder/tp_encoders.5/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.5/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.5.feed_forward.w_2.bias
,,,"onnx::MatMul_10206 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.5.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
2126,/encoder/tp_encoders.5/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.5/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.5/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2127,/encoder/tp_encoders.5/Add_1,Eltwise_Binary,"/encoder/tp_encoders.5/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.5/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.5/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2128,/encoder/tp_encoders.6/norm1/LayerNormalization,LayerNorm,"/encoder/tp_encoders.5/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.6/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10207 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10208 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2129,/encoder/tp_encoders.6/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.6/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2130,/encoder/tp_encoders.6/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/tp_encoders.6/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.tp_encoders.6.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10209 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.6.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
2131,/encoder/tp_encoders.6/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.6/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
2132,/encoder/tp_encoders.6/self_attn/Split,Split,"/encoder/tp_encoders.6/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/tp_encoders.6/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/tp_encoders.6/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
2133,/encoder/tp_encoders.6/self_attn/Reshape,Reshape,"/encoder/tp_encoders.6/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2134,/encoder/tp_encoders.6/self_attn/Transpose,Transpose,"/encoder/tp_encoders.6/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2135,/encoder/tp_encoders.6/self_attn/Reshape_1,Reshape,"/encoder/tp_encoders.6/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2136,/encoder/tp_encoders.6/self_attn/Reshape_2,Reshape,"/encoder/tp_encoders.6/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2137,/encoder/tp_encoders.6/self_attn/Transpose_1,Transpose,"/encoder/tp_encoders.6/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2138,/encoder/tp_encoders.6/self_attn/Transpose_2,Transpose,"/encoder/tp_encoders.6/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2139,/encoder/tp_encoders.6/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/tp_encoders.6/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
2140,/encoder/tp_encoders.6/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/tp_encoders.6/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2141,/encoder/tp_encoders.6/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/tp_encoders.6/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.tp_encoders.6.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
2142,/encoder/tp_encoders.6/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/tp_encoders.6/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
2143,/encoder/tp_encoders.6/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/tp_encoders.6/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
2144,/encoder/tp_encoders.6/self_attn/Transpose_3,Transpose,"/encoder/tp_encoders.6/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2145,/encoder/tp_encoders.6/self_attn/Add,Eltwise_Binary,"/encoder/tp_encoders.6/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.6/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2146,/encoder/tp_encoders.6/self_attn/Mul,Eltwise_Binary,"/encoder/tp_encoders.6/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
2147,/encoder/tp_encoders.6/self_attn/Transpose_4,Transpose,"/encoder/tp_encoders.6/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2148,/encoder/tp_encoders.6/self_attn/MatMul,MatMul,"/encoder/tp_encoders.6/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.6/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
2149,/encoder/tp_encoders.6/self_attn/Softmax,Softmax,"/encoder/tp_encoders.6/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
2150,/encoder/tp_encoders.6/self_attn/MatMul_1,MatMul,"/encoder/tp_encoders.6/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.6/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
2151,/encoder/tp_encoders.6/self_attn/Transpose_5,Transpose,"/encoder/tp_encoders.6/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2152,/encoder/tp_encoders.6/self_attn/Reshape_3,Reshape,"/encoder/tp_encoders.6/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2153,/encoder/tp_encoders.6/self_attn/linear_out/MatMul,FullyConnected,"/encoder/tp_encoders.6/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.6.self_attn.linear_out.bias
,,,"onnx::MatMul_10223 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.6.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
2154,/encoder/tp_encoders.6/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.6/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2155,/encoder/tp_encoders.6/self_attn/Add_1,Eltwise_Binary,"/encoder/tp_encoders.6/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.6/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.6/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2156,/encoder/tp_encoders.6/Add,Eltwise_Binary,"/encoder/tp_encoders.5/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.6/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.6/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2157,/encoder/tp_encoders.6/norm2/LayerNormalization,LayerNorm,"/encoder/tp_encoders.6/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.6/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10224 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10225 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2158,/encoder/tp_encoders.6/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.6/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.6/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2159,/encoder/tp_encoders.6/feed_forward/w_1/MatMul,FullyConnected,"/encoder/tp_encoders.6/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.6/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.tp_encoders.6.feed_forward.w_1.bias
,,,"onnx::MatMul_10226 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.6.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
2160,/encoder/tp_encoders.6/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.6/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.6/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
2161,/encoder/tp_encoders.6/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/tp_encoders.6/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.6/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
2162,/encoder/tp_encoders.6/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.6/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.6/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
2163,/encoder/tp_encoders.6/feed_forward/w_2/MatMul,FullyConnected,"/encoder/tp_encoders.6/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.6/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.6.feed_forward.w_2.bias
,,,"onnx::MatMul_10227 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.6.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
2164,/encoder/tp_encoders.6/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.6/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.6/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2165,/encoder/tp_encoders.6/Add_1,Eltwise_Binary,"/encoder/tp_encoders.6/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.6/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.6/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2166,/encoder/tp_encoders.7/norm1/LayerNormalization,LayerNorm,"/encoder/tp_encoders.6/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.7/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10228 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10229 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2167,/encoder/tp_encoders.7/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.7/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2168,/encoder/tp_encoders.7/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/tp_encoders.7/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.tp_encoders.7.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10230 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.7.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
2169,/encoder/tp_encoders.7/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.7/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
2170,/encoder/tp_encoders.7/self_attn/Split,Split,"/encoder/tp_encoders.7/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/tp_encoders.7/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/tp_encoders.7/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
2171,/encoder/tp_encoders.7/self_attn/Reshape,Reshape,"/encoder/tp_encoders.7/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2172,/encoder/tp_encoders.7/self_attn/Transpose,Transpose,"/encoder/tp_encoders.7/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2173,/encoder/tp_encoders.7/self_attn/Reshape_1,Reshape,"/encoder/tp_encoders.7/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2174,/encoder/tp_encoders.7/self_attn/Reshape_2,Reshape,"/encoder/tp_encoders.7/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2175,/encoder/tp_encoders.7/self_attn/Transpose_1,Transpose,"/encoder/tp_encoders.7/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2176,/encoder/tp_encoders.7/self_attn/Transpose_2,Transpose,"/encoder/tp_encoders.7/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2177,/encoder/tp_encoders.7/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/tp_encoders.7/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
2178,/encoder/tp_encoders.7/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/tp_encoders.7/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2179,/encoder/tp_encoders.7/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/tp_encoders.7/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.tp_encoders.7.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
2180,/encoder/tp_encoders.7/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/tp_encoders.7/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
2181,/encoder/tp_encoders.7/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/tp_encoders.7/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
2182,/encoder/tp_encoders.7/self_attn/Transpose_3,Transpose,"/encoder/tp_encoders.7/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2183,/encoder/tp_encoders.7/self_attn/Add,Eltwise_Binary,"/encoder/tp_encoders.7/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.7/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2184,/encoder/tp_encoders.7/self_attn/Mul,Eltwise_Binary,"/encoder/tp_encoders.7/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
2185,/encoder/tp_encoders.7/self_attn/Transpose_4,Transpose,"/encoder/tp_encoders.7/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2186,/encoder/tp_encoders.7/self_attn/MatMul,MatMul,"/encoder/tp_encoders.7/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.7/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
2187,/encoder/tp_encoders.7/self_attn/Softmax,Softmax,"/encoder/tp_encoders.7/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
2188,/encoder/tp_encoders.7/self_attn/MatMul_1,MatMul,"/encoder/tp_encoders.7/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.7/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
2189,/encoder/tp_encoders.7/self_attn/Transpose_5,Transpose,"/encoder/tp_encoders.7/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2190,/encoder/tp_encoders.7/self_attn/Reshape_3,Reshape,"/encoder/tp_encoders.7/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2191,/encoder/tp_encoders.7/self_attn/linear_out/MatMul,FullyConnected,"/encoder/tp_encoders.7/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.7.self_attn.linear_out.bias
,,,"onnx::MatMul_10244 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.7.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
2192,/encoder/tp_encoders.7/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.7/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2193,/encoder/tp_encoders.7/self_attn/Add_1,Eltwise_Binary,"/encoder/tp_encoders.7/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.7/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.7/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2194,/encoder/tp_encoders.7/Add,Eltwise_Binary,"/encoder/tp_encoders.6/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.7/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.7/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2195,/encoder/tp_encoders.7/norm2/LayerNormalization,LayerNorm,"/encoder/tp_encoders.7/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.7/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10245 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10246 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2196,/encoder/tp_encoders.7/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.7/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.7/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2197,/encoder/tp_encoders.7/feed_forward/w_1/MatMul,FullyConnected,"/encoder/tp_encoders.7/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.7/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.tp_encoders.7.feed_forward.w_1.bias
,,,"onnx::MatMul_10247 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.7.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
2198,/encoder/tp_encoders.7/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.7/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.7/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
2199,/encoder/tp_encoders.7/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/tp_encoders.7/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.7/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
2200,/encoder/tp_encoders.7/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.7/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.7/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
2201,/encoder/tp_encoders.7/feed_forward/w_2/MatMul,FullyConnected,"/encoder/tp_encoders.7/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.7/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.7.feed_forward.w_2.bias
,,,"onnx::MatMul_10248 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.7.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
2202,/encoder/tp_encoders.7/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.7/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.7/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2203,/encoder/tp_encoders.7/Add_1,Eltwise_Binary,"/encoder/tp_encoders.7/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.7/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.7/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2204,/encoder/tp_encoders.8/norm1/LayerNormalization,LayerNorm,"/encoder/tp_encoders.7/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.8/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10249 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10250 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2205,/encoder/tp_encoders.8/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.8/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2206,/encoder/tp_encoders.8/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/tp_encoders.8/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.tp_encoders.8.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10251 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.8.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
2207,/encoder/tp_encoders.8/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.8/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
2208,/encoder/tp_encoders.8/self_attn/Split,Split,"/encoder/tp_encoders.8/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/tp_encoders.8/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/tp_encoders.8/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
2209,/encoder/tp_encoders.8/self_attn/Reshape,Reshape,"/encoder/tp_encoders.8/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2210,/encoder/tp_encoders.8/self_attn/Transpose,Transpose,"/encoder/tp_encoders.8/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2211,/encoder/tp_encoders.8/self_attn/Reshape_1,Reshape,"/encoder/tp_encoders.8/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2212,/encoder/tp_encoders.8/self_attn/Reshape_2,Reshape,"/encoder/tp_encoders.8/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2213,/encoder/tp_encoders.8/self_attn/Transpose_1,Transpose,"/encoder/tp_encoders.8/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2214,/encoder/tp_encoders.8/self_attn/Transpose_2,Transpose,"/encoder/tp_encoders.8/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2215,/encoder/tp_encoders.8/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/tp_encoders.8/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
2216,/encoder/tp_encoders.8/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/tp_encoders.8/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2217,/encoder/tp_encoders.8/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/tp_encoders.8/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.tp_encoders.8.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
2218,/encoder/tp_encoders.8/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/tp_encoders.8/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
2219,/encoder/tp_encoders.8/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/tp_encoders.8/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
2220,/encoder/tp_encoders.8/self_attn/Transpose_3,Transpose,"/encoder/tp_encoders.8/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2221,/encoder/tp_encoders.8/self_attn/Add,Eltwise_Binary,"/encoder/tp_encoders.8/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.8/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2222,/encoder/tp_encoders.8/self_attn/Mul,Eltwise_Binary,"/encoder/tp_encoders.8/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
2223,/encoder/tp_encoders.8/self_attn/Transpose_4,Transpose,"/encoder/tp_encoders.8/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2224,/encoder/tp_encoders.8/self_attn/MatMul,MatMul,"/encoder/tp_encoders.8/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.8/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
2225,/encoder/tp_encoders.8/self_attn/Softmax,Softmax,"/encoder/tp_encoders.8/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
2226,/encoder/tp_encoders.8/self_attn/MatMul_1,MatMul,"/encoder/tp_encoders.8/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.8/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
2227,/encoder/tp_encoders.8/self_attn/Transpose_5,Transpose,"/encoder/tp_encoders.8/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2228,/encoder/tp_encoders.8/self_attn/Reshape_3,Reshape,"/encoder/tp_encoders.8/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2229,/encoder/tp_encoders.8/self_attn/linear_out/MatMul,FullyConnected,"/encoder/tp_encoders.8/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.8.self_attn.linear_out.bias
,,,"onnx::MatMul_10265 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.8.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
2230,/encoder/tp_encoders.8/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.8/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2231,/encoder/tp_encoders.8/self_attn/Add_1,Eltwise_Binary,"/encoder/tp_encoders.8/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.8/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.8/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2232,/encoder/tp_encoders.8/Add,Eltwise_Binary,"/encoder/tp_encoders.7/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.8/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.8/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2233,/encoder/tp_encoders.8/norm2/LayerNormalization,LayerNorm,"/encoder/tp_encoders.8/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.8/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10266 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10267 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2234,/encoder/tp_encoders.8/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.8/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.8/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2235,/encoder/tp_encoders.8/feed_forward/w_1/MatMul,FullyConnected,"/encoder/tp_encoders.8/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.8/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.tp_encoders.8.feed_forward.w_1.bias
,,,"onnx::MatMul_10268 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.8.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
2236,/encoder/tp_encoders.8/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.8/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.8/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
2237,/encoder/tp_encoders.8/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/tp_encoders.8/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.8/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
2238,/encoder/tp_encoders.8/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.8/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.8/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
2239,/encoder/tp_encoders.8/feed_forward/w_2/MatMul,FullyConnected,"/encoder/tp_encoders.8/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.8/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.8.feed_forward.w_2.bias
,,,"onnx::MatMul_10269 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.8.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
2240,/encoder/tp_encoders.8/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.8/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.8/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2241,/encoder/tp_encoders.8/Add_1,Eltwise_Binary,"/encoder/tp_encoders.8/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.8/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.8/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2242,/encoder/tp_encoders.9/norm1/LayerNormalization,LayerNorm,"/encoder/tp_encoders.8/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.9/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10270 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10271 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2243,/encoder/tp_encoders.9/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.9/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2244,/encoder/tp_encoders.9/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/tp_encoders.9/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.tp_encoders.9.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10272 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.9.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
2245,/encoder/tp_encoders.9/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.9/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
2246,/encoder/tp_encoders.9/self_attn/Split,Split,"/encoder/tp_encoders.9/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/tp_encoders.9/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/tp_encoders.9/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
2247,/encoder/tp_encoders.9/self_attn/Reshape,Reshape,"/encoder/tp_encoders.9/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2248,/encoder/tp_encoders.9/self_attn/Transpose,Transpose,"/encoder/tp_encoders.9/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2249,/encoder/tp_encoders.9/self_attn/Reshape_1,Reshape,"/encoder/tp_encoders.9/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2250,/encoder/tp_encoders.9/self_attn/Reshape_2,Reshape,"/encoder/tp_encoders.9/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2251,/encoder/tp_encoders.9/self_attn/Transpose_1,Transpose,"/encoder/tp_encoders.9/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2252,/encoder/tp_encoders.9/self_attn/Transpose_2,Transpose,"/encoder/tp_encoders.9/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2253,/encoder/tp_encoders.9/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/tp_encoders.9/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
2254,/encoder/tp_encoders.9/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/tp_encoders.9/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2255,/encoder/tp_encoders.9/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/tp_encoders.9/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.tp_encoders.9.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
2256,/encoder/tp_encoders.9/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/tp_encoders.9/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
2257,/encoder/tp_encoders.9/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/tp_encoders.9/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
2258,/encoder/tp_encoders.9/self_attn/Transpose_3,Transpose,"/encoder/tp_encoders.9/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2259,/encoder/tp_encoders.9/self_attn/Add,Eltwise_Binary,"/encoder/tp_encoders.9/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.9/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2260,/encoder/tp_encoders.9/self_attn/Mul,Eltwise_Binary,"/encoder/tp_encoders.9/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
2261,/encoder/tp_encoders.9/self_attn/Transpose_4,Transpose,"/encoder/tp_encoders.9/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2262,/encoder/tp_encoders.9/self_attn/MatMul,MatMul,"/encoder/tp_encoders.9/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.9/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
2263,/encoder/tp_encoders.9/self_attn/Softmax,Softmax,"/encoder/tp_encoders.9/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
2264,/encoder/tp_encoders.9/self_attn/MatMul_1,MatMul,"/encoder/tp_encoders.9/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.9/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
2265,/encoder/tp_encoders.9/self_attn/Transpose_5,Transpose,"/encoder/tp_encoders.9/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2266,/encoder/tp_encoders.9/self_attn/Reshape_3,Reshape,"/encoder/tp_encoders.9/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2267,/encoder/tp_encoders.9/self_attn/linear_out/MatMul,FullyConnected,"/encoder/tp_encoders.9/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.9.self_attn.linear_out.bias
,,,"onnx::MatMul_10286 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.9.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
2268,/encoder/tp_encoders.9/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.9/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2269,/encoder/tp_encoders.9/self_attn/Add_1,Eltwise_Binary,"/encoder/tp_encoders.9/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.9/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.9/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2270,/encoder/tp_encoders.9/Add,Eltwise_Binary,"/encoder/tp_encoders.8/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.9/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.9/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2271,/encoder/tp_encoders.9/norm2/LayerNormalization,LayerNorm,"/encoder/tp_encoders.9/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.9/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10287 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10288 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2272,/encoder/tp_encoders.9/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.9/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.9/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2273,/encoder/tp_encoders.9/feed_forward/w_1/MatMul,FullyConnected,"/encoder/tp_encoders.9/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.9/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.tp_encoders.9.feed_forward.w_1.bias
,,,"onnx::MatMul_10289 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.9.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
2274,/encoder/tp_encoders.9/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.9/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.9/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
2275,/encoder/tp_encoders.9/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/tp_encoders.9/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.9/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
2276,/encoder/tp_encoders.9/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.9/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.9/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
2277,/encoder/tp_encoders.9/feed_forward/w_2/MatMul,FullyConnected,"/encoder/tp_encoders.9/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.9/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.9.feed_forward.w_2.bias
,,,"onnx::MatMul_10290 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.9.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
2278,/encoder/tp_encoders.9/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.9/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.9/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2279,/encoder/tp_encoders.9/Add_1,Eltwise_Binary,"/encoder/tp_encoders.9/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.9/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.9/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2280,/encoder/tp_encoders.10/norm1/LayerNormalization,LayerNorm,"/encoder/tp_encoders.9/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.10/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10291 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10292 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2281,/encoder/tp_encoders.10/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.10/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2282,/encoder/tp_encoders.10/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/tp_encoders.10/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.tp_encoders.10.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10293 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.10.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
2283,/encoder/tp_encoders.10/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.10/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
2284,/encoder/tp_encoders.10/self_attn/Split,Split,"/encoder/tp_encoders.10/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/tp_encoders.10/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/tp_encoders.10/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
2285,/encoder/tp_encoders.10/self_attn/Reshape,Reshape,"/encoder/tp_encoders.10/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2286,/encoder/tp_encoders.10/self_attn/Transpose,Transpose,"/encoder/tp_encoders.10/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2287,/encoder/tp_encoders.10/self_attn/Reshape_1,Reshape,"/encoder/tp_encoders.10/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2288,/encoder/tp_encoders.10/self_attn/Reshape_2,Reshape,"/encoder/tp_encoders.10/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2289,/encoder/tp_encoders.10/self_attn/Transpose_1,Transpose,"/encoder/tp_encoders.10/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2290,/encoder/tp_encoders.10/self_attn/Transpose_2,Transpose,"/encoder/tp_encoders.10/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2291,/encoder/tp_encoders.10/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/tp_encoders.10/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
2292,/encoder/tp_encoders.10/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/tp_encoders.10/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2293,/encoder/tp_encoders.10/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/tp_encoders.10/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.tp_encoders.10.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
2294,/encoder/tp_encoders.10/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/tp_encoders.10/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
2295,/encoder/tp_encoders.10/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/tp_encoders.10/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
2296,/encoder/tp_encoders.10/self_attn/Transpose_3,Transpose,"/encoder/tp_encoders.10/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2297,/encoder/tp_encoders.10/self_attn/Add,Eltwise_Binary,"/encoder/tp_encoders.10/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.10/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2298,/encoder/tp_encoders.10/self_attn/Mul,Eltwise_Binary,"/encoder/tp_encoders.10/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
2299,/encoder/tp_encoders.10/self_attn/Transpose_4,Transpose,"/encoder/tp_encoders.10/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2300,/encoder/tp_encoders.10/self_attn/MatMul,MatMul,"/encoder/tp_encoders.10/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.10/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
2301,/encoder/tp_encoders.10/self_attn/Softmax,Softmax,"/encoder/tp_encoders.10/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
2302,/encoder/tp_encoders.10/self_attn/MatMul_1,MatMul,"/encoder/tp_encoders.10/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.10/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
2303,/encoder/tp_encoders.10/self_attn/Transpose_5,Transpose,"/encoder/tp_encoders.10/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2304,/encoder/tp_encoders.10/self_attn/Reshape_3,Reshape,"/encoder/tp_encoders.10/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2305,/encoder/tp_encoders.10/self_attn/linear_out/MatMul,FullyConnected,"/encoder/tp_encoders.10/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.10.self_attn.linear_out.bias
,,,"onnx::MatMul_10307 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.10.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
2306,/encoder/tp_encoders.10/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.10/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2307,/encoder/tp_encoders.10/self_attn/Add_1,Eltwise_Binary,"/encoder/tp_encoders.10/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.10/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.10/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2308,/encoder/tp_encoders.10/Add,Eltwise_Binary,"/encoder/tp_encoders.9/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.10/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.10/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2309,/encoder/tp_encoders.10/norm2/LayerNormalization,LayerNorm,"/encoder/tp_encoders.10/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.10/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10308 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10309 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2310,/encoder/tp_encoders.10/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.10/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.10/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2311,/encoder/tp_encoders.10/feed_forward/w_1/MatMul,FullyConnected,"/encoder/tp_encoders.10/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.10/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.tp_encoders.10.feed_forward.w_1.bias
,,,"onnx::MatMul_10310 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.10.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
2312,/encoder/tp_encoders.10/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.10/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.10/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
2313,/encoder/tp_encoders.10/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/tp_encoders.10/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.10/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
2314,/encoder/tp_encoders.10/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.10/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.10/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
2315,/encoder/tp_encoders.10/feed_forward/w_2/MatMul,FullyConnected,"/encoder/tp_encoders.10/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.10/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.10.feed_forward.w_2.bias
,,,"onnx::MatMul_10311 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.10.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
2316,/encoder/tp_encoders.10/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.10/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.10/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2317,/encoder/tp_encoders.10/Add_1,Eltwise_Binary,"/encoder/tp_encoders.10/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.10/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.10/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2318,/encoder/tp_encoders.11/norm1/LayerNormalization,LayerNorm,"/encoder/tp_encoders.10/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.11/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10312 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10313 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2319,/encoder/tp_encoders.11/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.11/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2320,/encoder/tp_encoders.11/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/tp_encoders.11/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.tp_encoders.11.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10314 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.11.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
2321,/encoder/tp_encoders.11/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.11/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
2322,/encoder/tp_encoders.11/self_attn/Split,Split,"/encoder/tp_encoders.11/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/tp_encoders.11/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/tp_encoders.11/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
2323,/encoder/tp_encoders.11/self_attn/Reshape,Reshape,"/encoder/tp_encoders.11/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2324,/encoder/tp_encoders.11/self_attn/Transpose,Transpose,"/encoder/tp_encoders.11/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2325,/encoder/tp_encoders.11/self_attn/Reshape_1,Reshape,"/encoder/tp_encoders.11/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2326,/encoder/tp_encoders.11/self_attn/Reshape_2,Reshape,"/encoder/tp_encoders.11/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2327,/encoder/tp_encoders.11/self_attn/Transpose_1,Transpose,"/encoder/tp_encoders.11/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2328,/encoder/tp_encoders.11/self_attn/Transpose_2,Transpose,"/encoder/tp_encoders.11/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2329,/encoder/tp_encoders.11/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/tp_encoders.11/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
2330,/encoder/tp_encoders.11/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/tp_encoders.11/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2331,/encoder/tp_encoders.11/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/tp_encoders.11/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.tp_encoders.11.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
2332,/encoder/tp_encoders.11/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/tp_encoders.11/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
2333,/encoder/tp_encoders.11/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/tp_encoders.11/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
2334,/encoder/tp_encoders.11/self_attn/Transpose_3,Transpose,"/encoder/tp_encoders.11/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2335,/encoder/tp_encoders.11/self_attn/Add,Eltwise_Binary,"/encoder/tp_encoders.11/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.11/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2336,/encoder/tp_encoders.11/self_attn/Mul,Eltwise_Binary,"/encoder/tp_encoders.11/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
2337,/encoder/tp_encoders.11/self_attn/Transpose_4,Transpose,"/encoder/tp_encoders.11/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2338,/encoder/tp_encoders.11/self_attn/MatMul,MatMul,"/encoder/tp_encoders.11/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.11/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
2339,/encoder/tp_encoders.11/self_attn/Softmax,Softmax,"/encoder/tp_encoders.11/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
2340,/encoder/tp_encoders.11/self_attn/MatMul_1,MatMul,"/encoder/tp_encoders.11/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.11/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
2341,/encoder/tp_encoders.11/self_attn/Transpose_5,Transpose,"/encoder/tp_encoders.11/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2342,/encoder/tp_encoders.11/self_attn/Reshape_3,Reshape,"/encoder/tp_encoders.11/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2343,/encoder/tp_encoders.11/self_attn/linear_out/MatMul,FullyConnected,"/encoder/tp_encoders.11/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.11.self_attn.linear_out.bias
,,,"onnx::MatMul_10328 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.11.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
2344,/encoder/tp_encoders.11/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.11/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2345,/encoder/tp_encoders.11/self_attn/Add_1,Eltwise_Binary,"/encoder/tp_encoders.11/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.11/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.11/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2346,/encoder/tp_encoders.11/Add,Eltwise_Binary,"/encoder/tp_encoders.10/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.11/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.11/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2347,/encoder/tp_encoders.11/norm2/LayerNormalization,LayerNorm,"/encoder/tp_encoders.11/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.11/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10329 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10330 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2348,/encoder/tp_encoders.11/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.11/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.11/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2349,/encoder/tp_encoders.11/feed_forward/w_1/MatMul,FullyConnected,"/encoder/tp_encoders.11/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.11/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.tp_encoders.11.feed_forward.w_1.bias
,,,"onnx::MatMul_10331 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.11.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
2350,/encoder/tp_encoders.11/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.11/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.11/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
2351,/encoder/tp_encoders.11/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/tp_encoders.11/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.11/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
2352,/encoder/tp_encoders.11/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.11/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.11/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
2353,/encoder/tp_encoders.11/feed_forward/w_2/MatMul,FullyConnected,"/encoder/tp_encoders.11/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.11/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.11.feed_forward.w_2.bias
,,,"onnx::MatMul_10332 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.11.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
2354,/encoder/tp_encoders.11/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.11/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.11/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2355,/encoder/tp_encoders.11/Add_1,Eltwise_Binary,"/encoder/tp_encoders.11/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.11/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.11/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2356,/encoder/tp_encoders.12/norm1/LayerNormalization,LayerNorm,"/encoder/tp_encoders.11/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.12/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10333 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10334 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2357,/encoder/tp_encoders.12/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.12/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2358,/encoder/tp_encoders.12/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/tp_encoders.12/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.tp_encoders.12.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10335 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.12.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
2359,/encoder/tp_encoders.12/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.12/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
2360,/encoder/tp_encoders.12/self_attn/Split,Split,"/encoder/tp_encoders.12/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/tp_encoders.12/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/tp_encoders.12/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
2361,/encoder/tp_encoders.12/self_attn/Reshape,Reshape,"/encoder/tp_encoders.12/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2362,/encoder/tp_encoders.12/self_attn/Transpose,Transpose,"/encoder/tp_encoders.12/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2363,/encoder/tp_encoders.12/self_attn/Reshape_1,Reshape,"/encoder/tp_encoders.12/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2364,/encoder/tp_encoders.12/self_attn/Reshape_2,Reshape,"/encoder/tp_encoders.12/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2365,/encoder/tp_encoders.12/self_attn/Transpose_1,Transpose,"/encoder/tp_encoders.12/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2366,/encoder/tp_encoders.12/self_attn/Transpose_2,Transpose,"/encoder/tp_encoders.12/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2367,/encoder/tp_encoders.12/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/tp_encoders.12/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
2368,/encoder/tp_encoders.12/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/tp_encoders.12/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2369,/encoder/tp_encoders.12/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/tp_encoders.12/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.tp_encoders.12.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
2370,/encoder/tp_encoders.12/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/tp_encoders.12/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
2371,/encoder/tp_encoders.12/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/tp_encoders.12/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
2372,/encoder/tp_encoders.12/self_attn/Transpose_3,Transpose,"/encoder/tp_encoders.12/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2373,/encoder/tp_encoders.12/self_attn/Add,Eltwise_Binary,"/encoder/tp_encoders.12/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.12/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2374,/encoder/tp_encoders.12/self_attn/Mul,Eltwise_Binary,"/encoder/tp_encoders.12/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
2375,/encoder/tp_encoders.12/self_attn/Transpose_4,Transpose,"/encoder/tp_encoders.12/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2376,/encoder/tp_encoders.12/self_attn/MatMul,MatMul,"/encoder/tp_encoders.12/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.12/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
2377,/encoder/tp_encoders.12/self_attn/Softmax,Softmax,"/encoder/tp_encoders.12/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
2378,/encoder/tp_encoders.12/self_attn/MatMul_1,MatMul,"/encoder/tp_encoders.12/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.12/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
2379,/encoder/tp_encoders.12/self_attn/Transpose_5,Transpose,"/encoder/tp_encoders.12/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2380,/encoder/tp_encoders.12/self_attn/Reshape_3,Reshape,"/encoder/tp_encoders.12/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2381,/encoder/tp_encoders.12/self_attn/linear_out/MatMul,FullyConnected,"/encoder/tp_encoders.12/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.12.self_attn.linear_out.bias
,,,"onnx::MatMul_10349 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.12.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
2382,/encoder/tp_encoders.12/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.12/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2383,/encoder/tp_encoders.12/self_attn/Add_1,Eltwise_Binary,"/encoder/tp_encoders.12/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.12/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.12/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2384,/encoder/tp_encoders.12/Add,Eltwise_Binary,"/encoder/tp_encoders.11/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.12/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.12/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2385,/encoder/tp_encoders.12/norm2/LayerNormalization,LayerNorm,"/encoder/tp_encoders.12/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.12/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10350 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10351 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2386,/encoder/tp_encoders.12/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.12/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.12/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2387,/encoder/tp_encoders.12/feed_forward/w_1/MatMul,FullyConnected,"/encoder/tp_encoders.12/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.12/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.tp_encoders.12.feed_forward.w_1.bias
,,,"onnx::MatMul_10352 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.12.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
2388,/encoder/tp_encoders.12/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.12/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.12/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
2389,/encoder/tp_encoders.12/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/tp_encoders.12/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.12/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
2390,/encoder/tp_encoders.12/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.12/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.12/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
2391,/encoder/tp_encoders.12/feed_forward/w_2/MatMul,FullyConnected,"/encoder/tp_encoders.12/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.12/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.12.feed_forward.w_2.bias
,,,"onnx::MatMul_10353 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.12.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
2392,/encoder/tp_encoders.12/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.12/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.12/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2393,/encoder/tp_encoders.12/Add_1,Eltwise_Binary,"/encoder/tp_encoders.12/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.12/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.12/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2394,/encoder/tp_encoders.13/norm1/LayerNormalization,LayerNorm,"/encoder/tp_encoders.12/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.13/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10354 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10355 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2395,/encoder/tp_encoders.13/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.13/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2396,/encoder/tp_encoders.13/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/tp_encoders.13/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.tp_encoders.13.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10356 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.13.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
2397,/encoder/tp_encoders.13/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.13/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
2398,/encoder/tp_encoders.13/self_attn/Split,Split,"/encoder/tp_encoders.13/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/tp_encoders.13/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/tp_encoders.13/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
2399,/encoder/tp_encoders.13/self_attn/Reshape,Reshape,"/encoder/tp_encoders.13/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2400,/encoder/tp_encoders.13/self_attn/Transpose,Transpose,"/encoder/tp_encoders.13/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2401,/encoder/tp_encoders.13/self_attn/Reshape_1,Reshape,"/encoder/tp_encoders.13/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2402,/encoder/tp_encoders.13/self_attn/Reshape_2,Reshape,"/encoder/tp_encoders.13/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2403,/encoder/tp_encoders.13/self_attn/Transpose_1,Transpose,"/encoder/tp_encoders.13/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2404,/encoder/tp_encoders.13/self_attn/Transpose_2,Transpose,"/encoder/tp_encoders.13/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2405,/encoder/tp_encoders.13/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/tp_encoders.13/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
2406,/encoder/tp_encoders.13/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/tp_encoders.13/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2407,/encoder/tp_encoders.13/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/tp_encoders.13/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.tp_encoders.13.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
2408,/encoder/tp_encoders.13/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/tp_encoders.13/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
2409,/encoder/tp_encoders.13/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/tp_encoders.13/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
2410,/encoder/tp_encoders.13/self_attn/Transpose_3,Transpose,"/encoder/tp_encoders.13/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2411,/encoder/tp_encoders.13/self_attn/Add,Eltwise_Binary,"/encoder/tp_encoders.13/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.13/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2412,/encoder/tp_encoders.13/self_attn/Mul,Eltwise_Binary,"/encoder/tp_encoders.13/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
2413,/encoder/tp_encoders.13/self_attn/Transpose_4,Transpose,"/encoder/tp_encoders.13/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2414,/encoder/tp_encoders.13/self_attn/MatMul,MatMul,"/encoder/tp_encoders.13/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.13/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
2415,/encoder/tp_encoders.13/self_attn/Softmax,Softmax,"/encoder/tp_encoders.13/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
2416,/encoder/tp_encoders.13/self_attn/MatMul_1,MatMul,"/encoder/tp_encoders.13/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.13/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
2417,/encoder/tp_encoders.13/self_attn/Transpose_5,Transpose,"/encoder/tp_encoders.13/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2418,/encoder/tp_encoders.13/self_attn/Reshape_3,Reshape,"/encoder/tp_encoders.13/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2419,/encoder/tp_encoders.13/self_attn/linear_out/MatMul,FullyConnected,"/encoder/tp_encoders.13/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.13.self_attn.linear_out.bias
,,,"onnx::MatMul_10370 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.13.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
2420,/encoder/tp_encoders.13/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.13/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2421,/encoder/tp_encoders.13/self_attn/Add_1,Eltwise_Binary,"/encoder/tp_encoders.13/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.13/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.13/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2422,/encoder/tp_encoders.13/Add,Eltwise_Binary,"/encoder/tp_encoders.12/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.13/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.13/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2423,/encoder/tp_encoders.13/norm2/LayerNormalization,LayerNorm,"/encoder/tp_encoders.13/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.13/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10371 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10372 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2424,/encoder/tp_encoders.13/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.13/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.13/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2425,/encoder/tp_encoders.13/feed_forward/w_1/MatMul,FullyConnected,"/encoder/tp_encoders.13/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.13/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.tp_encoders.13.feed_forward.w_1.bias
,,,"onnx::MatMul_10373 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.13.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
2426,/encoder/tp_encoders.13/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.13/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.13/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
2427,/encoder/tp_encoders.13/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/tp_encoders.13/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.13/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
2428,/encoder/tp_encoders.13/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.13/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.13/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
2429,/encoder/tp_encoders.13/feed_forward/w_2/MatMul,FullyConnected,"/encoder/tp_encoders.13/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.13/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.13.feed_forward.w_2.bias
,,,"onnx::MatMul_10374 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.13.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
2430,/encoder/tp_encoders.13/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.13/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.13/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2431,/encoder/tp_encoders.13/Add_1,Eltwise_Binary,"/encoder/tp_encoders.13/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.13/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.13/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2432,/encoder/tp_encoders.14/norm1/LayerNormalization,LayerNorm,"/encoder/tp_encoders.13/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.14/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10375 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10376 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2433,/encoder/tp_encoders.14/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.14/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2434,/encoder/tp_encoders.14/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/tp_encoders.14/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.tp_encoders.14.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10377 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.14.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
2435,/encoder/tp_encoders.14/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.14/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
2436,/encoder/tp_encoders.14/self_attn/Split,Split,"/encoder/tp_encoders.14/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/tp_encoders.14/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/tp_encoders.14/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
2437,/encoder/tp_encoders.14/self_attn/Reshape,Reshape,"/encoder/tp_encoders.14/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2438,/encoder/tp_encoders.14/self_attn/Transpose,Transpose,"/encoder/tp_encoders.14/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2439,/encoder/tp_encoders.14/self_attn/Reshape_1,Reshape,"/encoder/tp_encoders.14/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2440,/encoder/tp_encoders.14/self_attn/Reshape_2,Reshape,"/encoder/tp_encoders.14/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2441,/encoder/tp_encoders.14/self_attn/Transpose_1,Transpose,"/encoder/tp_encoders.14/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2442,/encoder/tp_encoders.14/self_attn/Transpose_2,Transpose,"/encoder/tp_encoders.14/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2443,/encoder/tp_encoders.14/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/tp_encoders.14/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
2444,/encoder/tp_encoders.14/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/tp_encoders.14/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2445,/encoder/tp_encoders.14/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/tp_encoders.14/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.tp_encoders.14.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
2446,/encoder/tp_encoders.14/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/tp_encoders.14/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
2447,/encoder/tp_encoders.14/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/tp_encoders.14/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
2448,/encoder/tp_encoders.14/self_attn/Transpose_3,Transpose,"/encoder/tp_encoders.14/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2449,/encoder/tp_encoders.14/self_attn/Add,Eltwise_Binary,"/encoder/tp_encoders.14/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.14/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2450,/encoder/tp_encoders.14/self_attn/Mul,Eltwise_Binary,"/encoder/tp_encoders.14/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
2451,/encoder/tp_encoders.14/self_attn/Transpose_4,Transpose,"/encoder/tp_encoders.14/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2452,/encoder/tp_encoders.14/self_attn/MatMul,MatMul,"/encoder/tp_encoders.14/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.14/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
2453,/encoder/tp_encoders.14/self_attn/Softmax,Softmax,"/encoder/tp_encoders.14/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
2454,/encoder/tp_encoders.14/self_attn/MatMul_1,MatMul,"/encoder/tp_encoders.14/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.14/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
2455,/encoder/tp_encoders.14/self_attn/Transpose_5,Transpose,"/encoder/tp_encoders.14/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2456,/encoder/tp_encoders.14/self_attn/Reshape_3,Reshape,"/encoder/tp_encoders.14/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2457,/encoder/tp_encoders.14/self_attn/linear_out/MatMul,FullyConnected,"/encoder/tp_encoders.14/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.14.self_attn.linear_out.bias
,,,"onnx::MatMul_10391 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.14.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
2458,/encoder/tp_encoders.14/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.14/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2459,/encoder/tp_encoders.14/self_attn/Add_1,Eltwise_Binary,"/encoder/tp_encoders.14/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.14/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.14/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2460,/encoder/tp_encoders.14/Add,Eltwise_Binary,"/encoder/tp_encoders.13/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.14/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.14/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2461,/encoder/tp_encoders.14/norm2/LayerNormalization,LayerNorm,"/encoder/tp_encoders.14/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.14/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10392 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10393 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2462,/encoder/tp_encoders.14/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.14/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.14/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2463,/encoder/tp_encoders.14/feed_forward/w_1/MatMul,FullyConnected,"/encoder/tp_encoders.14/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.14/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.tp_encoders.14.feed_forward.w_1.bias
,,,"onnx::MatMul_10394 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.14.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
2464,/encoder/tp_encoders.14/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.14/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.14/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
2465,/encoder/tp_encoders.14/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/tp_encoders.14/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.14/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
2466,/encoder/tp_encoders.14/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.14/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.14/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
2467,/encoder/tp_encoders.14/feed_forward/w_2/MatMul,FullyConnected,"/encoder/tp_encoders.14/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.14/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.14.feed_forward.w_2.bias
,,,"onnx::MatMul_10395 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.14.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
2468,/encoder/tp_encoders.14/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.14/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.14/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2469,/encoder/tp_encoders.14/Add_1,Eltwise_Binary,"/encoder/tp_encoders.14/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.14/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.14/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2470,/encoder/tp_encoders.15/norm1/LayerNormalization,LayerNorm,"/encoder/tp_encoders.14/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.15/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10396 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10397 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2471,/encoder/tp_encoders.15/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.15/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2472,/encoder/tp_encoders.15/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/tp_encoders.15/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.tp_encoders.15.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10398 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.15.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
2473,/encoder/tp_encoders.15/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.15/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
2474,/encoder/tp_encoders.15/self_attn/Split,Split,"/encoder/tp_encoders.15/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/tp_encoders.15/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/tp_encoders.15/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
2475,/encoder/tp_encoders.15/self_attn/Reshape,Reshape,"/encoder/tp_encoders.15/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2476,/encoder/tp_encoders.15/self_attn/Transpose,Transpose,"/encoder/tp_encoders.15/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2477,/encoder/tp_encoders.15/self_attn/Reshape_1,Reshape,"/encoder/tp_encoders.15/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2478,/encoder/tp_encoders.15/self_attn/Reshape_2,Reshape,"/encoder/tp_encoders.15/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2479,/encoder/tp_encoders.15/self_attn/Transpose_1,Transpose,"/encoder/tp_encoders.15/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2480,/encoder/tp_encoders.15/self_attn/Transpose_2,Transpose,"/encoder/tp_encoders.15/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2481,/encoder/tp_encoders.15/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/tp_encoders.15/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
2482,/encoder/tp_encoders.15/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/tp_encoders.15/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2483,/encoder/tp_encoders.15/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/tp_encoders.15/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.tp_encoders.15.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
2484,/encoder/tp_encoders.15/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/tp_encoders.15/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
2485,/encoder/tp_encoders.15/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/tp_encoders.15/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
2486,/encoder/tp_encoders.15/self_attn/Transpose_3,Transpose,"/encoder/tp_encoders.15/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2487,/encoder/tp_encoders.15/self_attn/Add,Eltwise_Binary,"/encoder/tp_encoders.15/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.15/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2488,/encoder/tp_encoders.15/self_attn/Mul,Eltwise_Binary,"/encoder/tp_encoders.15/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
2489,/encoder/tp_encoders.15/self_attn/Transpose_4,Transpose,"/encoder/tp_encoders.15/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2490,/encoder/tp_encoders.15/self_attn/MatMul,MatMul,"/encoder/tp_encoders.15/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.15/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
2491,/encoder/tp_encoders.15/self_attn/Softmax,Softmax,"/encoder/tp_encoders.15/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
2492,/encoder/tp_encoders.15/self_attn/MatMul_1,MatMul,"/encoder/tp_encoders.15/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.15/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
2493,/encoder/tp_encoders.15/self_attn/Transpose_5,Transpose,"/encoder/tp_encoders.15/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2494,/encoder/tp_encoders.15/self_attn/Reshape_3,Reshape,"/encoder/tp_encoders.15/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2495,/encoder/tp_encoders.15/self_attn/linear_out/MatMul,FullyConnected,"/encoder/tp_encoders.15/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.15.self_attn.linear_out.bias
,,,"onnx::MatMul_10412 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.15.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
2496,/encoder/tp_encoders.15/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.15/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2497,/encoder/tp_encoders.15/self_attn/Add_1,Eltwise_Binary,"/encoder/tp_encoders.15/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.15/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.15/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2498,/encoder/tp_encoders.15/Add,Eltwise_Binary,"/encoder/tp_encoders.14/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.15/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.15/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2499,/encoder/tp_encoders.15/norm2/LayerNormalization,LayerNorm,"/encoder/tp_encoders.15/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.15/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10413 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10414 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2500,/encoder/tp_encoders.15/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.15/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.15/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2501,/encoder/tp_encoders.15/feed_forward/w_1/MatMul,FullyConnected,"/encoder/tp_encoders.15/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.15/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.tp_encoders.15.feed_forward.w_1.bias
,,,"onnx::MatMul_10415 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.15.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
2502,/encoder/tp_encoders.15/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.15/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.15/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
2503,/encoder/tp_encoders.15/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/tp_encoders.15/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.15/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
2504,/encoder/tp_encoders.15/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.15/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.15/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
2505,/encoder/tp_encoders.15/feed_forward/w_2/MatMul,FullyConnected,"/encoder/tp_encoders.15/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.15/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.15.feed_forward.w_2.bias
,,,"onnx::MatMul_10416 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.15.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
2506,/encoder/tp_encoders.15/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.15/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.15/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2507,/encoder/tp_encoders.15/Add_1,Eltwise_Binary,"/encoder/tp_encoders.15/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.15/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.15/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2508,/encoder/tp_encoders.16/norm1/LayerNormalization,LayerNorm,"/encoder/tp_encoders.15/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.16/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10417 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10418 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2509,/encoder/tp_encoders.16/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.16/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2510,/encoder/tp_encoders.16/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/tp_encoders.16/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.tp_encoders.16.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10419 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.16.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
2511,/encoder/tp_encoders.16/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.16/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
2512,/encoder/tp_encoders.16/self_attn/Split,Split,"/encoder/tp_encoders.16/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/tp_encoders.16/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/tp_encoders.16/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
2513,/encoder/tp_encoders.16/self_attn/Reshape,Reshape,"/encoder/tp_encoders.16/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2514,/encoder/tp_encoders.16/self_attn/Transpose,Transpose,"/encoder/tp_encoders.16/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2515,/encoder/tp_encoders.16/self_attn/Reshape_1,Reshape,"/encoder/tp_encoders.16/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2516,/encoder/tp_encoders.16/self_attn/Reshape_2,Reshape,"/encoder/tp_encoders.16/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2517,/encoder/tp_encoders.16/self_attn/Transpose_1,Transpose,"/encoder/tp_encoders.16/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2518,/encoder/tp_encoders.16/self_attn/Transpose_2,Transpose,"/encoder/tp_encoders.16/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2519,/encoder/tp_encoders.16/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/tp_encoders.16/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
2520,/encoder/tp_encoders.16/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/tp_encoders.16/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2521,/encoder/tp_encoders.16/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/tp_encoders.16/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.tp_encoders.16.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
2522,/encoder/tp_encoders.16/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/tp_encoders.16/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
2523,/encoder/tp_encoders.16/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/tp_encoders.16/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
2524,/encoder/tp_encoders.16/self_attn/Transpose_3,Transpose,"/encoder/tp_encoders.16/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2525,/encoder/tp_encoders.16/self_attn/Add,Eltwise_Binary,"/encoder/tp_encoders.16/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.16/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2526,/encoder/tp_encoders.16/self_attn/Mul,Eltwise_Binary,"/encoder/tp_encoders.16/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
2527,/encoder/tp_encoders.16/self_attn/Transpose_4,Transpose,"/encoder/tp_encoders.16/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2528,/encoder/tp_encoders.16/self_attn/MatMul,MatMul,"/encoder/tp_encoders.16/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.16/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
2529,/encoder/tp_encoders.16/self_attn/Softmax,Softmax,"/encoder/tp_encoders.16/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
2530,/encoder/tp_encoders.16/self_attn/MatMul_1,MatMul,"/encoder/tp_encoders.16/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.16/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
2531,/encoder/tp_encoders.16/self_attn/Transpose_5,Transpose,"/encoder/tp_encoders.16/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2532,/encoder/tp_encoders.16/self_attn/Reshape_3,Reshape,"/encoder/tp_encoders.16/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2533,/encoder/tp_encoders.16/self_attn/linear_out/MatMul,FullyConnected,"/encoder/tp_encoders.16/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.16.self_attn.linear_out.bias
,,,"onnx::MatMul_10433 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.16.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
2534,/encoder/tp_encoders.16/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.16/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2535,/encoder/tp_encoders.16/self_attn/Add_1,Eltwise_Binary,"/encoder/tp_encoders.16/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.16/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.16/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2536,/encoder/tp_encoders.16/Add,Eltwise_Binary,"/encoder/tp_encoders.15/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.16/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.16/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2537,/encoder/tp_encoders.16/norm2/LayerNormalization,LayerNorm,"/encoder/tp_encoders.16/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.16/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10434 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10435 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2538,/encoder/tp_encoders.16/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.16/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.16/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2539,/encoder/tp_encoders.16/feed_forward/w_1/MatMul,FullyConnected,"/encoder/tp_encoders.16/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.16/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.tp_encoders.16.feed_forward.w_1.bias
,,,"onnx::MatMul_10436 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.16.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
2540,/encoder/tp_encoders.16/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.16/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.16/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
2541,/encoder/tp_encoders.16/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/tp_encoders.16/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.16/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
2542,/encoder/tp_encoders.16/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.16/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.16/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
2543,/encoder/tp_encoders.16/feed_forward/w_2/MatMul,FullyConnected,"/encoder/tp_encoders.16/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.16/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.16.feed_forward.w_2.bias
,,,"onnx::MatMul_10437 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.16.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
2544,/encoder/tp_encoders.16/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.16/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.16/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2545,/encoder/tp_encoders.16/Add_1,Eltwise_Binary,"/encoder/tp_encoders.16/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.16/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.16/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2546,/encoder/tp_encoders.17/norm1/LayerNormalization,LayerNorm,"/encoder/tp_encoders.16/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.17/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10438 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10439 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2547,/encoder/tp_encoders.17/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.17/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2548,/encoder/tp_encoders.17/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/tp_encoders.17/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.tp_encoders.17.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10440 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.17.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
2549,/encoder/tp_encoders.17/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.17/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
2550,/encoder/tp_encoders.17/self_attn/Split,Split,"/encoder/tp_encoders.17/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/tp_encoders.17/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/tp_encoders.17/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
2551,/encoder/tp_encoders.17/self_attn/Reshape,Reshape,"/encoder/tp_encoders.17/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2552,/encoder/tp_encoders.17/self_attn/Transpose,Transpose,"/encoder/tp_encoders.17/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2553,/encoder/tp_encoders.17/self_attn/Reshape_1,Reshape,"/encoder/tp_encoders.17/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2554,/encoder/tp_encoders.17/self_attn/Reshape_2,Reshape,"/encoder/tp_encoders.17/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2555,/encoder/tp_encoders.17/self_attn/Transpose_1,Transpose,"/encoder/tp_encoders.17/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2556,/encoder/tp_encoders.17/self_attn/Transpose_2,Transpose,"/encoder/tp_encoders.17/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2557,/encoder/tp_encoders.17/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/tp_encoders.17/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
2558,/encoder/tp_encoders.17/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/tp_encoders.17/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2559,/encoder/tp_encoders.17/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/tp_encoders.17/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.tp_encoders.17.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
2560,/encoder/tp_encoders.17/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/tp_encoders.17/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
2561,/encoder/tp_encoders.17/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/tp_encoders.17/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
2562,/encoder/tp_encoders.17/self_attn/Transpose_3,Transpose,"/encoder/tp_encoders.17/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2563,/encoder/tp_encoders.17/self_attn/Add,Eltwise_Binary,"/encoder/tp_encoders.17/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.17/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2564,/encoder/tp_encoders.17/self_attn/Mul,Eltwise_Binary,"/encoder/tp_encoders.17/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
2565,/encoder/tp_encoders.17/self_attn/Transpose_4,Transpose,"/encoder/tp_encoders.17/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2566,/encoder/tp_encoders.17/self_attn/MatMul,MatMul,"/encoder/tp_encoders.17/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.17/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
2567,/encoder/tp_encoders.17/self_attn/Softmax,Softmax,"/encoder/tp_encoders.17/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
2568,/encoder/tp_encoders.17/self_attn/MatMul_1,MatMul,"/encoder/tp_encoders.17/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.17/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
2569,/encoder/tp_encoders.17/self_attn/Transpose_5,Transpose,"/encoder/tp_encoders.17/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2570,/encoder/tp_encoders.17/self_attn/Reshape_3,Reshape,"/encoder/tp_encoders.17/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2571,/encoder/tp_encoders.17/self_attn/linear_out/MatMul,FullyConnected,"/encoder/tp_encoders.17/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.17.self_attn.linear_out.bias
,,,"onnx::MatMul_10454 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.17.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
2572,/encoder/tp_encoders.17/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.17/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2573,/encoder/tp_encoders.17/self_attn/Add_1,Eltwise_Binary,"/encoder/tp_encoders.17/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.17/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.17/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2574,/encoder/tp_encoders.17/Add,Eltwise_Binary,"/encoder/tp_encoders.16/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.17/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.17/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2575,/encoder/tp_encoders.17/norm2/LayerNormalization,LayerNorm,"/encoder/tp_encoders.17/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.17/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10455 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10456 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2576,/encoder/tp_encoders.17/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.17/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.17/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2577,/encoder/tp_encoders.17/feed_forward/w_1/MatMul,FullyConnected,"/encoder/tp_encoders.17/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.17/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.tp_encoders.17.feed_forward.w_1.bias
,,,"onnx::MatMul_10457 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.17.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
2578,/encoder/tp_encoders.17/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.17/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.17/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
2579,/encoder/tp_encoders.17/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/tp_encoders.17/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.17/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
2580,/encoder/tp_encoders.17/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.17/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.17/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
2581,/encoder/tp_encoders.17/feed_forward/w_2/MatMul,FullyConnected,"/encoder/tp_encoders.17/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.17/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.17.feed_forward.w_2.bias
,,,"onnx::MatMul_10458 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.17.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
2582,/encoder/tp_encoders.17/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.17/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.17/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2583,/encoder/tp_encoders.17/Add_1,Eltwise_Binary,"/encoder/tp_encoders.17/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.17/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.17/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2584,/encoder/tp_encoders.18/norm1/LayerNormalization,LayerNorm,"/encoder/tp_encoders.17/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.18/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10459 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10460 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2585,/encoder/tp_encoders.18/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.18/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2586,/encoder/tp_encoders.18/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/tp_encoders.18/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.tp_encoders.18.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10461 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.18.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
2587,/encoder/tp_encoders.18/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.18/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
2588,/encoder/tp_encoders.18/self_attn/Split,Split,"/encoder/tp_encoders.18/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/tp_encoders.18/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/tp_encoders.18/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
2589,/encoder/tp_encoders.18/self_attn/Reshape,Reshape,"/encoder/tp_encoders.18/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2590,/encoder/tp_encoders.18/self_attn/Transpose,Transpose,"/encoder/tp_encoders.18/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2591,/encoder/tp_encoders.18/self_attn/Reshape_1,Reshape,"/encoder/tp_encoders.18/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2592,/encoder/tp_encoders.18/self_attn/Reshape_2,Reshape,"/encoder/tp_encoders.18/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2593,/encoder/tp_encoders.18/self_attn/Transpose_1,Transpose,"/encoder/tp_encoders.18/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2594,/encoder/tp_encoders.18/self_attn/Transpose_2,Transpose,"/encoder/tp_encoders.18/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2595,/encoder/tp_encoders.18/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/tp_encoders.18/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
2596,/encoder/tp_encoders.18/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/tp_encoders.18/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2597,/encoder/tp_encoders.18/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/tp_encoders.18/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.tp_encoders.18.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
2598,/encoder/tp_encoders.18/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/tp_encoders.18/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
2599,/encoder/tp_encoders.18/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/tp_encoders.18/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
2600,/encoder/tp_encoders.18/self_attn/Transpose_3,Transpose,"/encoder/tp_encoders.18/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2601,/encoder/tp_encoders.18/self_attn/Add,Eltwise_Binary,"/encoder/tp_encoders.18/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.18/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2602,/encoder/tp_encoders.18/self_attn/Mul,Eltwise_Binary,"/encoder/tp_encoders.18/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
2603,/encoder/tp_encoders.18/self_attn/Transpose_4,Transpose,"/encoder/tp_encoders.18/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2604,/encoder/tp_encoders.18/self_attn/MatMul,MatMul,"/encoder/tp_encoders.18/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.18/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
2605,/encoder/tp_encoders.18/self_attn/Softmax,Softmax,"/encoder/tp_encoders.18/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
2606,/encoder/tp_encoders.18/self_attn/MatMul_1,MatMul,"/encoder/tp_encoders.18/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.18/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
2607,/encoder/tp_encoders.18/self_attn/Transpose_5,Transpose,"/encoder/tp_encoders.18/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2608,/encoder/tp_encoders.18/self_attn/Reshape_3,Reshape,"/encoder/tp_encoders.18/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2609,/encoder/tp_encoders.18/self_attn/linear_out/MatMul,FullyConnected,"/encoder/tp_encoders.18/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.18.self_attn.linear_out.bias
,,,"onnx::MatMul_10475 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.18.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
2610,/encoder/tp_encoders.18/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.18/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2611,/encoder/tp_encoders.18/self_attn/Add_1,Eltwise_Binary,"/encoder/tp_encoders.18/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.18/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.18/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2612,/encoder/tp_encoders.18/Add,Eltwise_Binary,"/encoder/tp_encoders.17/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.18/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.18/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2613,/encoder/tp_encoders.18/norm2/LayerNormalization,LayerNorm,"/encoder/tp_encoders.18/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.18/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10476 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10477 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2614,/encoder/tp_encoders.18/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.18/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.18/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2615,/encoder/tp_encoders.18/feed_forward/w_1/MatMul,FullyConnected,"/encoder/tp_encoders.18/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.18/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.tp_encoders.18.feed_forward.w_1.bias
,,,"onnx::MatMul_10478 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.18.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
2616,/encoder/tp_encoders.18/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.18/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.18/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
2617,/encoder/tp_encoders.18/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/tp_encoders.18/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.18/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
2618,/encoder/tp_encoders.18/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.18/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.18/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
2619,/encoder/tp_encoders.18/feed_forward/w_2/MatMul,FullyConnected,"/encoder/tp_encoders.18/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.18/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.18.feed_forward.w_2.bias
,,,"onnx::MatMul_10479 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.18.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
2620,/encoder/tp_encoders.18/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.18/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.18/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2621,/encoder/tp_encoders.18/Add_1,Eltwise_Binary,"/encoder/tp_encoders.18/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.18/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.18/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2622,/encoder/tp_encoders.19/norm1/LayerNormalization,LayerNorm,"/encoder/tp_encoders.18/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.19/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10480 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10481 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2623,/encoder/tp_encoders.19/self_attn/linear_q_k_v/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.19/norm1/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2624,/encoder/tp_encoders.19/self_attn/linear_q_k_v/MatMul,FullyConnected,"/encoder/tp_encoders.19/self_attn/linear_q_k_v/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)",97x1536,A D G C,bias_op_name: encoder.tp_encoders.19.self_attn.linear_q_k_v.bias
,,,"onnx::MatMul_10482 (data type: Float_32; tensor dimension: [1536,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.19.self_attn.linear_q_k_v.bias (data type: Float_32; tensor dimension: [1536]; tensor type: STATIC),,,,param count: 787k (0.341%)
,,,,,,,MACs per inference: 786k (0.295%)
2625,/encoder/tp_encoders.19/self_attn/linear_q_k_v/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.19/self_attn/linear_q_k_v/Add_output_0_fc (data type: Float_32; tensor dimension: [97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)",1x97x1536,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 1536]"
2626,/encoder/tp_encoders.19/self_attn/Split,Split,"/encoder/tp_encoders.19/self_attn/linear_q_k_v/Add_output_0 (data type: Float_32; tensor dimension: [1,97,1536]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axis: 2
,,,,"/encoder/tp_encoders.19/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,packageName: qti.aisw
,,,,"/encoder/tp_encoders.19/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,,"split_index: [512, 1024]"
2627,/encoder/tp_encoders.19/self_attn/Reshape,Reshape,"/encoder/tp_encoders.19/self_attn/Split_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2628,/encoder/tp_encoders.19/self_attn/Transpose,Transpose,"/encoder/tp_encoders.19/self_attn/Reshape_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2629,/encoder/tp_encoders.19/self_attn/Reshape_1,Reshape,"/encoder/tp_encoders.19/self_attn/Split_output_1 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2630,/encoder/tp_encoders.19/self_attn/Reshape_2,Reshape,"/encoder/tp_encoders.19/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 4, 128]"
2631,/encoder/tp_encoders.19/self_attn/Transpose_1,Transpose,"/encoder/tp_encoders.19/self_attn/Reshape_2_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2632,/encoder/tp_encoders.19/self_attn/Transpose_2,Transpose,"/encoder/tp_encoders.19/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2633,/encoder/tp_encoders.19/self_attn/fsmn_block/Conv_reshape_to_2d,Reshape,"/encoder/tp_encoders.19/self_attn/Transpose_2_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 1, 97]"
2634,/encoder/tp_encoders.19/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc,Transpose,"/encoder/tp_encoders.19/self_attn/fsmn_block/Conv_reshape_to_2d (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2635,/encoder/tp_encoders.19/self_attn/fsmn_block/Conv_2d,DepthWiseConv2d,"/encoder/tp_encoders.19/self_attn/fsmn_block/Conv_reshape_to_2d.nhwc (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)",1x1x97x512,A D G C,bias_op_name: 
,,,"encoder.tp_encoders.19.self_attn.fsmn_block.weight (data type: Float_32; tensor dimension: [1,11,1,512]; tensor type: STATIC)",,,,"dilation: [1, 1]"
,,,,,,,packageName: qti.aisw
,,,,,,,"pad_amount: [[0, 0], [5, 5]]"
,,,,,,,padding_size_strategy: 5
,,,,,,,"stride: [1, 1]"
2636,/encoder/tp_encoders.19/self_attn/fsmn_block/Conv_intermediate.nchw,Transpose,"/encoder/tp_encoders.19/self_attn/fsmn_block/Conv_intermediate (data type: Float_32; tensor dimension: [1,1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)",1x512x1x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 3, 1, 2]"
2637,/encoder/tp_encoders.19/self_attn/fsmn_block/Conv_intermediate,Reshape,"/encoder/tp_encoders.19/self_attn/fsmn_block/Conv_intermediate.nchw (data type: Float_32; tensor dimension: [1,512,1,97]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)",1x512x97,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 512, 97]"
2638,/encoder/tp_encoders.19/self_attn/Transpose_3,Transpose,"/encoder/tp_encoders.19/self_attn/fsmn_block/Conv_output_0 (data type: Float_32; tensor dimension: [1,512,97]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
2639,/encoder/tp_encoders.19/self_attn/Add,Eltwise_Binary,"/encoder/tp_encoders.19/self_attn/Transpose_3_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.19/self_attn/Split_output_2 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2640,/encoder/tp_encoders.19/self_attn/Mul,Eltwise_Binary,"/encoder/tp_encoders.19/self_attn/Transpose_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,operation: 13
,,,/encoder/encoders0.0/self_attn/Constant_9_output_0 (data type: Float_32; tensor dimension: [1]; tensor type: STATIC),,,,packageName: qti.aisw
2641,/encoder/tp_encoders.19/self_attn/Transpose_4,Transpose,"/encoder/tp_encoders.19/self_attn/Reshape_1_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",1x4x128x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 3, 1]"
2642,/encoder/tp_encoders.19/self_attn/MatMul,MatMul,"/encoder/tp_encoders.19/self_attn/Mul_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.19/self_attn/Transpose_4_output_0 (data type: Float_32; tensor dimension: [1,4,128,97]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 2k (0.000769%)
2643,/encoder/tp_encoders.19/self_attn/Softmax,Softmax,"/encoder/tp_encoders.19/self_attn/MatMul_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)",1x4x97x97,A D G C,axis: 3
,,,,,,,beta: 1
,,,,,,,packageName: qti.aisw
2644,/encoder/tp_encoders.19/self_attn/MatMul_1,MatMul,"/encoder/tp_encoders.19/self_attn/Softmax_output_0 (data type: Float_32; tensor dimension: [1,4,97,97]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",1x4x97x128,A D G C,packageName: qti.aisw
,,,"/encoder/tp_encoders.19/self_attn/Transpose_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)",,,,transpose_in0: False
,,,,,,,transpose_in1: False
,,,,,,,MACs per inference: 1k (0.000583%)
2645,/encoder/tp_encoders.19/self_attn/Transpose_5,Transpose,"/encoder/tp_encoders.19/self_attn/MatMul_1_output_0 (data type: Float_32; tensor dimension: [1,4,97,128]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)",1x97x4x128,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1, 3]"
2646,/encoder/tp_encoders.19/self_attn/Reshape_3,Reshape,"/encoder/tp_encoders.19/self_attn/Transpose_5_output_0 (data type: Float_32; tensor dimension: [1,97,4,128]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2647,/encoder/tp_encoders.19/self_attn/linear_out/MatMul,FullyConnected,"/encoder/tp_encoders.19/self_attn/linear_out/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.19.self_attn.linear_out.bias
,,,"onnx::MatMul_10496 (data type: Float_32; tensor dimension: [512,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.19.self_attn.linear_out.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 262k (0.114%)
,,,,,,,MACs per inference: 262k (0.0984%)
2648,/encoder/tp_encoders.19/self_attn/linear_out/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.19/self_attn/linear_out/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2649,/encoder/tp_encoders.19/self_attn/Add_1,Eltwise_Binary,"/encoder/tp_encoders.19/self_attn/linear_out/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.19/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.19/self_attn/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2650,/encoder/tp_encoders.19/Add,Eltwise_Binary,"/encoder/tp_encoders.18/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.19/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.19/self_attn/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2651,/encoder/tp_encoders.19/norm2/LayerNormalization,LayerNorm,"/encoder/tp_encoders.19/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.19/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10497 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10498 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2652,/encoder/tp_encoders.19/feed_forward/w_1/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.19/norm2/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.19/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2653,/encoder/tp_encoders.19/feed_forward/w_1/MatMul,FullyConnected,"/encoder/tp_encoders.19/feed_forward/w_1/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.19/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,bias_op_name: encoder.tp_encoders.19.feed_forward.w_1.bias
,,,"onnx::MatMul_10499 (data type: Float_32; tensor dimension: [2048,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.19.feed_forward.w_1.bias (data type: Float_32; tensor dimension: [2048]; tensor type: STATIC),,,,param count: 1M (0.454%)
,,,,,,,MACs per inference: 1M (0.394%)
2654,/encoder/tp_encoders.19/feed_forward/w_1/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.19/feed_forward/w_1/Add_output_0_fc (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.19/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 2048]"
2655,/encoder/tp_encoders.19/feed_forward/activation/Relu,ElementWiseNeuron,"/encoder/tp_encoders.19/feed_forward/w_1/Add_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.19/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)",1x97x2048,A D G C,operation: 4
,,,,,,,packageName: qti.aisw
2656,/encoder/tp_encoders.19/feed_forward/w_2/MatMul_pre_reshape,Reshape,"/encoder/tp_encoders.19/feed_forward/activation/Relu_output_0 (data type: Float_32; tensor dimension: [1,97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.19/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)",97x2048,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 2048]"
2657,/encoder/tp_encoders.19/feed_forward/w_2/MatMul,FullyConnected,"/encoder/tp_encoders.19/feed_forward/w_2/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,2048]; tensor type: NATIVE)","/encoder/tp_encoders.19/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,bias_op_name: encoder.tp_encoders.19.feed_forward.w_2.bias
,,,"onnx::MatMul_10500 (data type: Float_32; tensor dimension: [512,2048]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,encoder.tp_encoders.19.feed_forward.w_2.bias (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,param count: 1M (0.453%)
,,,,,,,MACs per inference: 1M (0.394%)
2658,/encoder/tp_encoders.19/feed_forward/w_2/MatMul_post_reshape,Reshape,"/encoder/tp_encoders.19/feed_forward/w_2/Add_output_0_fc (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","/encoder/tp_encoders.19/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 512]"
2659,/encoder/tp_encoders.19/Add_1,Eltwise_Binary,"/encoder/tp_encoders.19/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_encoders.19/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,operation: 0
,,,"/encoder/tp_encoders.19/feed_forward/w_2/Add_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",,,,packageName: qti.aisw
2660,/encoder/tp_norm/LayerNormalization,LayerNorm,"/encoder/tp_encoders.19/Add_1_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/encoder/tp_norm/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)",1x97x512,A D G C,axes: [2]
,,,onnx::LayerNormalization_10501 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,epsilon: 1e-05
,,,onnx::LayerNormalization_10502 (data type: Float_32; tensor dimension: [512]; tensor type: STATIC),,,,packageName: qti.aisw
,,,,,,,MACs per inference: 248k (0.0933%)
2661,/ctc_lo/MatMul_pre_reshape,Reshape,"/encoder/tp_norm/LayerNormalization_output_0 (data type: Float_32; tensor dimension: [1,97,512]; tensor type: NATIVE)","/ctc_lo/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)",97x512,A D G C,packageName: qti.aisw
,,,,,,,"shape: [97, 512]"
2662,/ctc_lo/MatMul,FullyConnected,"/ctc_lo/MatMul_pre_reshape (data type: Float_32; tensor dimension: [97,512]; tensor type: NATIVE)","logits_fc (data type: Float_32; tensor dimension: [97,25055]; tensor type: NATIVE)",97x25055,A D G C,bias_op_name: ctc.ctc_lo.bias
,,,"onnx::MatMul_10503 (data type: Float_32; tensor dimension: [25055,512]; tensor type: STATIC)",,,,packageName: qti.aisw
,,,ctc.ctc_lo.bias (data type: Float_32; tensor dimension: [25055]; tensor type: STATIC),,,,param count: 12M (5.56%)
,,,,,,,MACs per inference: 12M (4.82%)
2663,/ctc_lo/MatMul_post_reshape,Reshape,"logits_fc (data type: Float_32; tensor dimension: [97,25055]; tensor type: NATIVE)","logits_fc.ncf (data type: Float_32; tensor dimension: [1,97,25055]; tensor type: NATIVE)",1x97x25055,A D G C,packageName: qti.aisw
,,,,,,,"shape: [1, 97, 25055]"
2664,/ctc_lo/MatMul_post_reshape_transpose,Transpose,"logits_fc.ncf (data type: Float_32; tensor dimension: [1,97,25055]; tensor type: NATIVE)","logits.nfc (data type: Float_32; tensor dimension: [1,25055,97]; tensor type: NATIVE)",1x25055x97,A D G C,packageName: qti.aisw
,,,,,,,"perm: [0, 2, 1]"
Note: The supported runtimes column assumes a processor target of Snapdragon 855
Key : A:AIP
      D:DSP
      G:GPU
      C:CPU
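
The Key above maps the single-letter supported-runtimes column (e.g. `A D G C` in the rows) to runtime names. A minimal helper, assuming that cell format (space-separated key letters):

```python
# Map the supported-runtimes key letters from the table above to runtime names.
RUNTIME_KEY = {"A": "AIP", "D": "DSP", "G": "GPU", "C": "CPU"}

def decode_runtimes(cell: str) -> list[str]:
    """Decode a supported-runtimes cell such as 'A D G C'."""
    return [RUNTIME_KEY[letter] for letter in cell.split()]
```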
""
Input Name,Dimensions,Type,Encoding Info
prompt,"1,4",Int_32,No encoding info for this tensor
x,"1,93,560",Float_32,No encoding info for this tensor
Unconsumed Tensor Name,Dimensions,Type,Encoding Info
logits.nfc,"1,25055,97",Float_32,No encoding info for this tensor
Total parameters: 231350751 (882 MB assuming single-precision float. This does not represent the actual memory requirement for the model; it is a rough estimate of the parameters' contribution: 4 x number of params, in bytes)
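
The 882 MB figure can be reproduced from the parameter count. A quick sketch (the tool appears to divide by 2**20, i.e. mebibytes, despite printing "MB"):

```python
# Rough check of the reported figure: a float32 parameter is 4 bytes.
total_params = 231_350_751              # from the "Total parameters" line above
param_bytes = total_params * 4          # ~925.4 million bytes
param_mib = param_bytes / (1024 ** 2)   # ~882.5 MiB; the tool prints "882 MB"
print(f"{param_mib:.1f} MiB")
```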
Total MACs per inference: 266M (100%)
"Ops used by Graph: Concat, DepthWiseConv2d, ElementWiseNeuron, Eltwise_Binary, FullyConnected, Gather, LayerNorm, MatMul, Reshape, Softmax, Split, Transpose"
Est. Steady-State Memory Needed to Run: 881.5 MiB
""
Cache Info: