AI Center
Building ML Packages
Data scientists build pre-trained models using Python or an AutoML platform. These models are consumed by RPA developers in workflows.

A package must adhere to a few requirements. These requirements are separated into components needed for serving a model and components needed for training a model.
- A folder containing a main.py file at the root of that folder.
- In this file, a class named Main that implements at least two functions:
  - `__init__(self)`: takes no arguments and loads your model and/or local data for the model (e.g., word embeddings).
  - `predict(self, input)`: the function called at model serving time.
- A file named requirements.txt with the dependencies needed to run the model.

The `predict` function is used as the model's endpoint.
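Putting the serving requirements together, a minimal main.py might look like the sketch below. The "model" here is a stand-in that simply echoes its input, so the file runs without any dependencies; a real package would load a serialized model in `__init__`.

```python
# Hypothetical minimal main.py; the "model" is a stand-in that echoes input.
class Main(object):
    def __init__(self):
        # A real package would load a serialized model here, e.g. with joblib.
        self.model = None

    def predict(self, input):
        # A real package would run inference here; this sketch echoes the input.
        return input
```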
In addition to inference, a package can optionally be used to train a machine learning model. To do so, provide the following:
- A file named train.py in the same root folder as the main.py file.
- In this file, a class named Main that implements at least four functions. All of the functions below, except `__init__`, are optional, but omitting them limits the types of pipelines that can be run with the corresponding package:
  - `__init__(self)`: takes no arguments and loads your model and/or data for the model (e.g., word embeddings).
  - `train(self, training_directory)`: takes as input a directory with arbitrarily structured data and runs all the code needed to train the model. This function is called whenever a training pipeline is executed.
  - `evaluate(self, evaluation_directory)`: takes as input a directory with arbitrarily structured data, runs all the code needed to evaluate the model, and returns a single score for that evaluation. This function is called whenever an evaluation pipeline is executed.
  - `save(self)`: takes no arguments. This function is called after each call to the `train` function to persist your model.
  - `process_data(self, input_directory)`: takes an `input_directory` with arbitrarily structured data. This function is only called when a full pipeline is executed. In a full pipeline, this function can perform arbitrary data transformations and can split the data. Specifically, any data saved to the path pointed to by the environment variable `training_data_directory` is the input of the `train` function, and any data saved to the path pointed to by the environment variable `evaluation_data_directory` is the input of the `evaluate` function above.
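As a sketch of this five-function contract (not the official template), a train.py could look like the following. The "model" is a stand-in that merely counts lines in the files it is trained on, and only the two environment variable names described above are taken from the documentation; everything else is illustrative.

```python
# Hypothetical train.py skeleton; the "model" merely counts lines in its input files.
import os
import shutil

class Main(object):
    def __init__(self):
        self.line_count = 0  # stand-in for loading a real model

    def train(self, training_directory):
        # Called by training pipelines; "fit" the stand-in model on every file.
        for name in os.listdir(training_directory):
            with open(os.path.join(training_directory, name)) as f:
                self.line_count += sum(1 for _ in f)

    def evaluate(self, evaluation_directory):
        # Called by evaluation pipelines; must return a single score.
        return float(len(os.listdir(evaluation_directory)))

    def save(self):
        # Called after each train() call to persist the model.
        with open('model.txt', 'w') as f:
            f.write(str(self.line_count))

    def process_data(self, input_directory):
        # Called only by full pipelines: route data to the training/evaluation
        # directories pointed to by the two environment variables.
        train_dir = os.environ["training_data_directory"]
        eval_dir = os.environ["evaluation_data_directory"]
        for i, name in enumerate(sorted(os.listdir(input_directory))):
            dest = train_dir if i % 3 != 0 else eval_dir  # roughly 2/3 train, 1/3 eval
            shutil.copy(os.path.join(input_directory, name), dest)
```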
To make AI Fabric easier to use within an RPA workflow, a package can be declared to have one of three input types: string, file, or list of files (set when uploading the package).

When a JSON-formatted string is used as the package's input type, deserializing the data into the type the model expects can be done in the `predict` function. Below are some examples of deserializing data in Python:
```python
# Robot sends raw string to ML Skill Activity
# E.g. skill_input='a customer complaint'
def predict(self, skill_input):
    example = skill_input  # No extra processing

# Robot sends JSON-formatted string to ML Skill Activity
# E.g. skill_input='{"email": "a customer complaint", "date": "mm:dd:yy"}'
def predict(self, skill_input):
    import json
    example = json.loads(skill_input)

# Robot sends JSON-formatted string with number array to ML Skill Activity
# E.g. skill_input='[10, 15, 20]'
def predict(self, skill_input):
    import json
    import numpy as np
    example = np.array(json.loads(skill_input))

# Robot sends JSON-formatted pandas dataframe
# E.g. skill_input='{"row 1":{"col 1":"a","col 2":"b"},
#                    "row 2":{"col 1":"c","col 2":"d"}}'
def predict(self, skill_input):
    import pandas as pd
    example = pd.read_json(skill_input)
```
When file is selected as the input type, the ML Skill activity in the workflow expects a path to a file; the workflow reads the file and passes it to the `predict` function as a serialized byte string. The RPA developer can thus pass a file path instead of having to read and serialize the file in the workflow itself.

Deserializing the data can again be done in the `predict` function; the general case is reading the bytes directly into a file-like object, as follows:
```python
# ML Package has been uploaded with file as input type. The ML Skill Activity
# expects a file path. Any file type can be passed as input and it will be serialized.
def predict(self, skill_input):
    import io
    file_like = io.BytesIO(skill_input)
```
Reading the serialized bytes as shown above is equivalent to opening a file with the read-binary flag enabled. To test the model locally, read a file as a binary file. The following example reads an image file and tests it locally:
```python
# main.py where model input is an image
class Main(object):
    ...
    def predict(self, skill_input):
        import io
        from PIL import Image
        image = Image.open(io.BytesIO(skill_input))
    ...

if __name__ == '__main__':
    # Test the ML Package locally
    with open('./image-to-test-locally.png', 'rb') as input_file:
        file_bytes = input_file.read()
    m = Main()
    print(m.predict(file_bytes))
```
Similarly, the following example reads a csv file and uses a pandas dataframe in the `predict` function:
```python
# main.py where model input is a csv file
class Main(object):
    ...
    def predict(self, skill_input):
        import io
        import pandas as pd
        data_frame = pd.read_csv(io.BytesIO(skill_input))
    ...

if __name__ == '__main__':
    # Test the ML Package locally
    with open('./csv-to-test-locally.csv', 'rb') as input_file:
        file_bytes = input_file.read()
    m = Main()
    print(m.predict(file_bytes))
```
When list of files is selected as the input type, a list of files can be sent to the skill. Within the workflow, the input to the activity is a string with comma-separated file paths. The workflow reads the files, and the input to the `predict` function is a list of bytes, where each element of the list is the byte string of one file.
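A `predict` function for this input type can therefore iterate over the list. The sketch below is an illustrative example, not part of the template; it assumes each file contains UTF-8 text and decodes every element:

```python
# Hypothetical predict for the "list of files" input type: skill_input is a
# list of byte strings, one element per file sent by the workflow.
import io

class Main(object):
    def predict(self, skill_input):
        texts = []
        for file_bytes in skill_input:
            file_like = io.BytesIO(file_bytes)  # wrap bytes in a file-like object
            texts.append(file_like.read().decode('utf-8'))
        return texts
```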
In train.py, any executed pipeline can persist arbitrary data, called pipeline output. Any data written to the directory path pointed to by the environment variable artifacts is persisted and can be viewed at any time by navigating to the Pipeline Details page. Typically, any kind of graphs and statistics of the training/evaluation jobs can be saved in the artifacts directory and are accessible from the UI at the end of the pipeline run.
```python
# train.py where histogram plots are saved in the artifacts directory during Full Pipeline execution
# Full pipeline (using process_data) will automatically split data.csv into 2/3 train.csv
# (which will be in the directory passed to the train function) and 1/3 test.csv
import os
import pandas as pd
from sklearn.model_selection import train_test_split

class Main(object):
    ...
    def process_data(self, data_directory):
        d = pd.read_csv(os.path.join(data_directory, 'data.csv'))
        d = self.clean_data(d)
        d_train, d_test = train_test_split(d, test_size=0.33, random_state=42)
        d_train.to_csv(os.path.join(data_directory, 'training', 'train.csv'), index=False)
        d_test.to_csv(os.path.join(data_directory, 'test', 'test.csv'), index=False)
        self.save_artifacts(d_train, 'train_hist.png', os.environ["artifacts"])
        self.save_artifacts(d_test, 'test_hist.png', os.environ["artifacts"])
    ...
    def save_artifacts(self, data, file_name, artifact_directory):
        plot = data.hist()
        fig = plot[0][0].get_figure()
        fig.savefig(os.path.join(artifact_directory, file_name))
    ...
```
During model development, the TensorFlow graph must be loaded on the same thread as the one used for serving. To do so, the default graph must be used. Below is an example with the necessary modifications:
```python
import tensorflow as tf

class Main(object):
    def __init__(self):
        self.graph = tf.get_default_graph()  # Add this line
        ...
    def predict(self, skill_input):
        with self.graph.as_default():
            ...
```
If GPU is enabled at skill creation time, the skill is deployed on an image with NVIDIA GPU driver 418, CUDA Toolkit 10.0, and the CUDA Deep Neural Network Library (cuDNN) 7.6.5 runtime library.
In this example, the business problem does not require model retraining; the package serves a pre-trained model, serialized as IrisClassifier.sav.

1. Initial project tree (without main.py and requirements.txt):
IrisClassifier/
- IrisClassifier.sav
2. Sample main.py to add to the root folder:
```python
from sklearn.externals import joblib
import json

class Main(object):
    def __init__(self):
        self.model = joblib.load('IrisClassifier.sav')

    def predict(self, X):
        X = json.loads(X)
        result = self.model.predict_proba(X)
        return json.dumps(result.tolist())
```
3. Add requirements.txt:

```
scikit-learn==0.19.0
```

The installed libraries must also respect the following constraints (saved here in a constraints.txt file):

```
itsdangerous<2.1.0
Jinja2<3.0.5
Werkzeug<2.1.0
click<8.0.0
```
To test this, you can run the following command in a fresh environment and make sure all libraries install correctly:

```
pip install -r requirements.txt -c constraints.txt
```
4. Final folder structure:

IrisClassifier/
- IrisClassifier.sav
- main.py
- requirements.txt
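To check the JSON input/output contract of this package locally without the serialized model, the classifier can be swapped for a stub. The snippet below is a hedged smoke test, not part of the package: StubModel and its uniform probabilities are invented for illustration, standing in for joblib.load('IrisClassifier.sav').

```python
# Local smoke test of main.py's JSON contract, with a stub in place of
# the serialized IrisClassifier so the snippet runs anywhere.
import json

class StubModel:
    def predict_proba(self, X):
        # Pretend every row is equally likely to be any of the 3 iris classes.
        return [[1.0 / 3] * 3 for _ in X]

class Main(object):
    def __init__(self):
        self.model = StubModel()  # real package: joblib.load('IrisClassifier.sav')

    def predict(self, X):
        X = json.loads(X)
        result = self.model.predict_proba(X)
        return json.dumps(result)

if __name__ == '__main__':
    print(Main().predict('[[5.1, 3.5, 1.4, 0.2]]'))
```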
In this example, the business problem requires model retraining. Building on the package described above, you might have the following:
1. Initial project tree (serving-only package):

IrisClassifier/
- IrisClassifier.sav
- main.py
- requirements.txt
2. Sample train.py to add to the root folder:
```python
import os
import pandas as pd
import joblib

class Main(object):
    def __init__(self):
        self.model_path = './IrisClassifier.sav'
        self.model = joblib.load(self.model_path)

    def train(self, training_directory):
        (X, y) = self.load_data(os.path.join(training_directory, 'train.csv'))
        self.model.fit(X, y)

    def evaluate(self, evaluation_directory):
        (X, y) = self.load_data(os.path.join(evaluation_directory, 'evaluate.csv'))
        return self.model.score(X, y)

    def save(self):
        joblib.dump(self.model, self.model_path)

    def load_data(self, path):
        # The last column in the csv file is the target column for prediction.
        df = pd.read_csv(path)
        X = df.iloc[:, :-1].values
        y = df.iloc[:, -1].values
        return X, y
```
3. If needed, edit requirements.txt:

```
pandas==1.0.1
scikit-learn==0.19.0
```
4. Final folder (package) structure:

IrisClassifier/
- IrisClassifier.sav
- main.py
- requirements.txt
- train.py

Note: This model can now first be served and, as new data points come into the system through Robots or human-in-the-loop validation, training and evaluation pipelines can be created using train.py.