interpret looks useful

An OSS library that looks useful for explainable AI. The snippet below checks how much each feature contributes to an individual prediction.

import numpy as np

from interpret.glassbox import ExplainableBoostingClassifier

# Train an Explainable Boosting Machine (x_train/y_train, x_test/y_test prepared elsewhere)
ebm = ExplainableBoostingClassifier()
ebm.fit(x_train, y_train)

# Local explanations: per-sample feature contributions
ebm_local = ebm.explain_local(x_test, y_test)

# Rank the features of test sample index 37 by the absolute size of their contribution;
# note that _internal_obj is an undocumented internal of the explanation object
sorted(
    [(k, v) for k, v in zip(ebm_local._internal_obj['specific'][37]['names'],
                            ebm_local._internal_obj['specific'][37]['scores'])],
    key=lambda x: np.abs(x[1]),
    reverse=True
)
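Roughly the same ranking can probably be obtained without reaching into _internal_obj, via the explanation object's data() accessor and interpret's show() helper. A minimal sketch, assuming data(37) returns a dict with 'names' and 'scores' for that sample (sample index 37 is just the one used above):

from interpret import show

# Interactive dashboard for the local explanations (notebook / browser)
show(ebm_local)

# Per-sample dict via the data() accessor; assumed to expose 'names' and 'scores'
sample = ebm_local.data(37)
ranked = sorted(
    zip(sample['names'], sample['scores']),
    key=lambda x: abs(x[1]),
    reverse=True,
)
print(ranked)

# Overall feature importance across the whole model, for comparison
show(ebm.explain_global())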
