Human-centered AI for Improving Humanity in Digital Learning

AI is envisioned to evolve into human-centered AI (HAI), which approaches AI from a human perspective by considering human conditions and contexts. In this seminar, Prof Stephen Yang of National Central University, Taiwan, presents his research on using HAI to evaluate new technology designs that can be leveraged to advance AI research, education, policy, and practice to improve humanity in digital learning.

Most current discussions of AI technology focus on how AI can enable human performance. However, Prof Yang's research shows that AI can also inhibit the human condition, and he advocates an in-depth dialogue between technology-based and human-based research to improve the understanding of HAI from multiple perspectives.

(length: 57:27)

Human-centered AI for Improving Humanity in Digital Learning

Presenter: Prof Stephen Yang

Introduction

00:00 – 04:36

Historical trends of AI

04:37 – 05:22

Machine learning

Machine Learning

05:23 – 06:51

SVM (Support Vector Machine)

Decision tree

Random forest

Ensemble learning

Logistic regression

Bayes' theorem (Naïve Bayes)

KNN (K-Nearest Neighbors)

Neural network
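As a quick point of reference for the methods listed above (05:23 – 06:51), the sketch below shows how several of them can be trained and compared with scikit-learn. The dataset (iris), default hyperparameters, and the voting ensemble used to illustrate ensemble learning are assumptions for illustration only; they are not taken from the talk.

```python
# Minimal comparison of the classifiers mentioned in the talk, using scikit-learn.
# The dataset (iris) and default hyperparameters are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "SVM": SVC(),
    "Decision tree": DecisionTreeClassifier(),
    "Random forest": RandomForestClassifier(),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "KNN": KNeighborsClassifier(),
    "Neural network": MLPClassifier(max_iter=2000),
}
# Ensemble learning: combine several base models by majority vote.
models["Voting ensemble"] = VotingClassifier(
    estimators=[("svm", SVC()), ("tree", DecisionTreeClassifier()), ("knn", KNeighborsClassifier())]
)

for name, clf in models.items():
    clf.fit(X_train, y_train)
    print(f"{name}: test accuracy = {clf.score(X_test, y_test):.2f}")
```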

Deep Learning

06:52 – 07:28

Deep learning for machine perception

07:29 – 08:24

Deep learning models

08:25 – 09:15

What is BERT?

09:16 – 10:08

BERT: pre-training and fine-tuning

10:09 – 11:06

BERT and recent improvements over it

11:07 – 12:30

What is GPT-3?

12:31 – 14:20

What can transformers do?

14:21 – 14:42

Bias in natural language results in biased transformers

14:43 – 16:31

Examples of bias in natural language
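To make the "BERT: pre-training and fine-tuning" segment (09:16 – 10:08) concrete, here is a minimal sketch of fine-tuning a pre-trained BERT checkpoint for a two-class task with the Hugging Face transformers library. The checkpoint name, example sentences, and labels are illustrative placeholders, not material from the talk.

```python
# Minimal sketch: fine-tune a pre-trained BERT model for binary classification.
# Assumes the Hugging Face `transformers` library and PyTorch are installed;
# the checkpoint, sentences, and labels below are illustrative placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

sentences = ["This essay develops its argument clearly.", "The paragraphs feel disconnected."]
labels = torch.tensor([1, 0])  # hypothetical labels for illustration

inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few fine-tuning steps on the toy batch
    outputs = model(**inputs, labels=labels)  # the model returns a loss when labels are given
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
with torch.no_grad():
    predictions = model(**inputs).logits.argmax(dim=-1)
print(predictions)
```

The expensive part, pre-training on large unlabeled corpora, is already captured in the downloaded checkpoint; fine-tuning only adjusts the weights for the downstream task, which is the split between pre-training and fine-tuning that the talk describes.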

From Cool Technology to Warm Humanity

16:32 – 22:21

AI considering humanity

Considering Humanity with Human-centered AI

22:22 – 31:55

AI under human control & concerning the human condition

AI around Learning Analytics

31:56 – 43:41

Technology & humanity

Reflection of Learning

43:42 – 44:08

Unlearn & relearn

44:09 – 44:27

Seeing the invisible through the visible

Closing

44:28 – 45:16

Remarks

Questions and Comments

45:17 – 47:43

Do you have any suggestions for reducing biases such as gender bias and racial bias?

47:44 – 49:46

Regarding automatic essay marking with AI, how can AI capture coherence, such as the flow of ideas, in a written essay? How does it work? Can it replace human marking?

49:47 – 51:42

Human feelings vs machine emotions

51:43 – 52:31

Can AI analyze arguments in argumentative writing?

52:32 – 55:10

Regarding automatic grading, how does your AI model differ from commercially available AI tools such as Grammarly and Criteria?

55:11 – 55:36

Is your tool available for grading our students’ work?

55:37 – 57:27

Future direction
