{"componentChunkName":"component---src-templates-aig-portal-template-tsx","path":"/tkmk9tuw1","result":{"data":{"markdownRemark":{"html":"<h2 id=\"环境安装\">Environment Setup</h2>\n<p><a href=\"/ai-doc/ERNIE/Fkmg84a0z\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Environment Installation and Configuration</a></p>\n<h2 id=\"目录结构\">Directory Structure</h2>\n<p>The sequence labeling task is located at /wenxin/tasks/sequence_labeling.</p>\n<div class=\"gatsby-highlight\" data-language=\"text\"><pre class=\"language-text\"><code class=\"language-text\">├── __init__.py\n├── env.sh                                                 ## Environment-variable setup script for the non-image development kit\n├── run_with_json.py                                       ## Entry script for model training driven solely by a JSON config\n├── run_infer.py                                           ## Entry script for model prediction driven solely by a JSON config\n├── examples                                               ## JSON config files for typical networks\n│   ├── seqlab_crf_ch.json\n│   ├── seqlab_crf_ch_infer.json\n│   ├── seqlab_ernie_2.0_base_crf_ch.json\n│   └── ...\n├── data                                                   ## Sample data folder, containing each task's training set (train_data), test set (test_data), validation set (dev_data), and prediction set (predict_data)\n│   ├── train_data\n│   │   └── train.txt\n│   ├── test_data\n│   │   └── test.txt\n│   ├── dev_data\n│   │   └── dev.txt\n│   └── predict_data\n│        └── infer.txt\n├── dict                                                   ## Sample vocabulary folder\n│   ├── vocab_label_map.txt                                ## Sample label vocabulary using the IOB tagging scheme\n│   └── vocab.txt\n└── ...</code></pre></div>\n<h2 id=\"预置reader配置\">Preset Reader Configuration</h2>\n<p>The preset reader is configured through the dataset_reader section of the JSON file. Taking the sequence labeling config seqlab_crf_ch.json as an example, its dataset_reader section looks like this:</p>\n<div class=\"gatsby-highlight\" data-language=\"text\"><pre class=\"language-text\"><code class=\"language-text\">{\n  &quot;dataset_reader&quot;: {\n    &quot;train_reader&quot;: {                                   ## Training, validation, and testing each use their own dataset, possibly in different formats, so a separate reader can be configured for each in the JSON; this one is the training-set reader.\n      &quot;name&quot;: &quot;train_reader&quot;,\n      &quot;type&quot;: &quot;BasicDataSetReader&quot;,                     ## Uses BasicDataSetReader, which wraps common operations such as reading TSV files and assembling batches.\n      &quot;fields&quot;: [                                       ## 
A field is a high-level Wenxin abstraction: when one sample contains several fields, each field has its own data type (text, numeric, integer, float) and its own vocabulary, and can be given its own semantic representation (e.g. converting text to ids); field_reader is the class that implements these operations.\n        {\n          &quot;name&quot;: &quot;text_a&quot;,                             ## The text feature field for sequence labeling, named &quot;text_a&quot;.\n          &quot;data_type&quot;: &quot;string&quot;,                        ## data_type sets the field's data type: string for text, int for integers, float for floats.\n          &quot;reader&quot;: {&quot;type&quot;:&quot;CustomTextFieldReader&quot;},   ## Uses the generic text-field reader &quot;CustomTextFieldReader&quot;; numeric-array fields use &quot;ScalarArrayFieldReader&quot;, numeric scalar fields use &quot;ScalarFieldReader&quot;.\n          &quot;tokenizer&quot;:{\n              &quot;type&quot;:&quot;CustomTokenizer&quot;,                 ## Sets this text field's tokenizer to CustomTokenizer.\n              &quot;split_char&quot;:&quot; &quot;,                         ## Tokens are separated by spaces.\n              &quot;unk_token&quot;:&quot;[UNK]&quot;,                      ## The unknown-token marker is &quot;[UNK]&quot;.\n              &quot;params&quot;:null\n            },\n          &quot;need_convert&quot;: true,                         ## true means the data is plain text that must be converted to ids through the vocabulary.\n          &quot;vocab_path&quot;: &quot;./dict/vocab.txt&quot;,             ## The vocabulary for this text field.\n          &quot;max_seq_len&quot;: 512,                           ## The maximum length of each field.\n          &quot;truncation_type&quot;: 0,                         ## Truncation strategy: 0 truncates from the start up to the maximum length; 1 truncates to max_len-1 and appends the final id (word or character); 2 keeps the head and tail positions and then truncates from the start up to the maximum length.\n          &quot;padding_id&quot;: 0                               ## The id used for padding.\n        },                                              ## If a sample has several feature fields (text or numeric types are both allowed), configure each one in the same way, adding one entry per field; fields within a sample are then separated by \\t.\n        {\n          &quot;name&quot;: &quot;label&quot;,                              ## The label is its own field, named &quot;label&quot;. If labels from different task schemes live in separate fields, basic multi-task learning becomes possible.\n          &quot;data_type&quot;: &quot;string&quot;,                        ## In sequence labeling, labels are text.\n          &quot;reader&quot;:{&quot;type&quot;:&quot;CustomTextFieldReader&quot;},\n          &quot;tokenizer&quot;:{\n              &quot;type&quot;:&quot;CustomTokenizer&quot;,\n              &quot;split_char&quot;:&quot; &quot;,\n              &quot;unk_token&quot;:&quot;O&quot;,\n              &quot;params&quot;:null\n          },\n          &quot;need_convert&quot;: true,\n          &quot;vocab_path&quot;: &quot;./dict/vocab_label_map.txt&quot;,   ## The vocabulary defining the label tagging scheme.\n          &quot;max_seq_len&quot;: 512,\n          &quot;truncation_type&quot;: 0,\n          &quot;padding_id&quot;: 0\n        }\n      ],\n      &quot;config&quot;: {\n        &quot;data_path&quot;: &quot;./data/train_data/&quot;,              ## Path to the training data for train_reader; point it at the folder.\n        &quot;shuffle&quot;: false,\n        &quot;batch_size&quot;: 8,\n        &quot;epoch&quot;: 10,\n        &quot;sampling_rate&quot;: 1.0\n      }\n    },\n    ……\n  },\n  ……\n}</code></pre></div>\n<h3 id=\"自定义reader配置\">Custom Reader Configuration</h3>\n<p>A custom reader is implemented by overriding the base_dataset_reader base class as your project requires. The variable-naming rules live in common.rule.InstanceName, which holds the global variables shared by the model and data parts; it connects the data pipeline to the network, and links the forward-pass loss to the optimizer's backward pass and to metric computation. Some of the data-related entries are shown below:</p>\n<div class=\"gatsby-highlight\" data-language=\"text\"><pre class=\"language-text\"><code class=\"language-text\">...\n    RECORD_ID = &quot;id&quot;\n    RECORD_EMB = &quot;emb&quot;\n    SRC_IDS = 
&quot;src_ids&quot;\n    MASK_IDS = &quot;mask_ids&quot;\n    SEQ_LENS = &quot;seq_lens&quot;\n    SENTENCE_IDS = &quot;sent_ids&quot;\n    POS_IDS = &quot;pos_ids&quot;\n    TASK_IDS = &quot;task_ids&quot;\n...</code></pre></div>\n<h2 id=\"tokenizer配置\">Tokenizer Configuration</h2>\n<p>For tasks that fine-tune an ERNIE pretrained model, text fields are tokenized with FullTokenizer by default. FullTokenizer generally strips whitespace from the text field, splits Chinese into single characters, and applies subword segmentation to English; if a sample contains English, subword segmentation can leave the text field and the label field with mismatched shapes. To fix this, replace FullTokenizer with CustomTokenizer, which leaves the text untouched and simply splits on split_char to produce the final tokens.</p>\n<p>You can also use the following script to quickly check whether the token sequence after tokenization matches the label sequence in length:</p>\n<div class=\"gatsby-highlight\" data-language=\"python\"><pre class=\"language-python\"><code class=\"language-python\"># Note: FullTokenizer is provided by the toolkit; import it from your\n# installation's tokenization module before running this check.\nif __name__ == &quot;__main__&quot;:\n    # text = &quot;丰 田 rav 4 荣 放 2 0 2 0 款 两 驱 多 少 钱\\tO O O O O O O O O O O O O O O O O O&quot;\n    vocab_file = &quot;./model_files/dict/vocab_ernie_2.0_base_ch.txt&quot;\n \n    tokenizer = FullTokenizer(vocab_file=vocab_file)\n    with open(&quot;train.txt&quot;, 'r') as f:\n        lines = f.readlines()\n        for index, line in enumerate(lines):\n            line = line.rstrip()\n            fields = line.split('\\t')\n            tokens = tokenizer.tokenize(fields[0])\n            labels = fields[1].split(' ')\n            if len(labels) != len(tokens):\n                print(&quot;index: &quot;, index, &quot;\\t&quot;, line, &quot;\\t len(labels): &quot;, len(labels), &quot;  len(tokens): &quot;, len(tokens))\n                # break</code></pre></div>\n<h2 id=\"开始训练\">Start Training</h2>\n<ul>\n<li>If you are using the Docker-image development kit, you can go straight to the next step. If you are combining the Wenxin development kit with an existing local environment, set the corresponding environment variables in ./env.sh and run source env.sh; for details, see <a href=\"ERNIE/%E5%BF%AB%E9%80%9F%E5%BC%80%E5%A7%8B%E5%92%8C%E4%BD%BF%E7%94%A8/%E7%8E%AF%E5%A2%83%E5%AE%89%E8%A3%85%E4%B8%8E%E9%85%8D%E7%BD%AE.md\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Environment Installation and Configuration</a>.</li>\n<li>The entry script for model training is ./run_with_json.py; pass it a JSON config file from the ./examples/ directory via the --param_path argument. For example: <code class=\"language-text\">python run_with_json.py --param_path 
./examples/seqlab_ernie_2.0_base_crf_ch.json</code></li>\n<li>Training logs are automatically saved to <strong>./log/test.log</strong>.</li>\n<li>Model files produced during and after training are saved by default under <strong>./output/seqlab_ernie_2.0_base_crf_ch/</strong>; the <strong>save_inference_model/</strong> folder holds the model files used for prediction, and the <strong>save_checkpoint/</strong> folder holds the model files used for warm starts.</li>\n</ul>\n<h2 id=\"开始预测\">Start Prediction</h2>\n<ul>\n<li>If you are using the Docker-image development kit, you can go straight to the next step. If you are combining the Wenxin development kit with an existing local environment, set the corresponding environment variables in ./env.sh and run source env.sh; for details, see <a href=\"/ai-doc/ERNIE/Fkmg84a0z\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Environment Installation and Configuration</a>.</li>\n<li>Pick the configured JSON file and put the path of the inference_model files for the model you want to run into the JSON file's &quot;<strong>inference_model_path</strong>&quot; variable.</li>\n<li>The entry script for model prediction is ./run_infer.py; pass it a JSON config file from the ./examples/ directory via the --param_path argument. For example: <code class=\"language-text\">python run_infer.py --param_path 
./examples/seqlab_ernie_2.0_base_crf_ch_infer.json</code></li>\n<li>The output of the prediction run is automatically saved to <strong>./output/predict_result.txt</strong>.</li>\n</ul>","fields":{"slug":"tkmk9tuw1","title":"Training and Prediction","date":"2021-03-25"},"headings":[{"value":"Environment Setup","depth":2},{"value":"Directory Structure","depth":2},{"value":"Preset Reader Configuration","depth":2},{"value":"Custom Reader Configuration","depth":3},{"value":"Tokenizer Configuration","depth":2},{"value":"Start Training","depth":2},{"value":"Start Prediction","depth":2}]}},"pageContext":{"isCreatedByStatefulCreatePages":false,"slug":"tkmk9tuw1","prev":{"id":"ukmk9ldvd","name":"准备工作","path":"ukmk9ldvd","filePath":"任务详解/序列标注任务/准备工作.md","parentIds":["gkma5ltot","Vkmjzxfyz"],"parents":[{"id":"gkma5ltot","documentId":null,"name":"任务详解","repoName":"ERNIE","filePath":"任务详解","disabled":false,"path":"gkma5ltot","lastMergeTime":null},{"id":"Vkmjzxfyz","documentId":null,"name":"序列标注任务","repoName":"ERNIE","filePath":"任务详解/序列标注任务","disabled":false,"path":"Vkmjzxfyz","lastMergeTime":null}]},"next":{"id":"okmka7epm","name":"阅读理解任务","path":"okmka7epm","filePath":"任务详解/阅读理解任务/适用场景.md","parentIds":["gkma5ltot","0kmjzxmfw"],"parents":[{"id":"gkma5ltot","documentId":null,"name":"任务详解","repoName":"ERNIE","filePath":"任务详解","disabled":false,"path":"gkma5ltot","lastMergeTime":null},{"id":"0kmjzxmfw","documentId":null,"name":"阅读理解任务","repoName":"ERNIE","filePath":"任务详解/阅读理解任务","disabled":false,"path":"0kmjzxmfw","lastMergeTime":null}]},"parents":[{"id":"gkma5ltot","documentId":null,"name":"任务详解","repoName":"ERNIE","filePath":"任务详解","disabled":false,"path":"gkma5ltot","lastMergeTime":null},{"id":"Vkmjzxfyz","documentId":null,"name":"序列标注任务","repoName":"ERNIE","filePath":"任务详解/序列标注任务","disabled":false,"path":"Vkmjzxfyz","lastMergeTime":null}]}}}