Monday, January 29, 2024

Organizational and Labour Implications of ChatGPT

Where are you seeing the impact of ChatGPT in your industry/sector? Are particular roles more vulnerable?

Just came across a fascinating report by Future Skills Centre - Centre des Compétences futures on organizational and labour implications of the influence of ChatGPT in Canada. 


The report says that for Canada, ChatGPT potentially has the biggest impact on 12 occupations, including post-secondary teachers and researchers, computer engineers, information systems managers, physicists and astronomers, and psychologists!


Interestingly, all of these roles fall into either the STEM (science, technology, engineering, and math) professional cluster or the knowledge-worker cluster, both of which were previously identified as having the lowest automation risk and the tightest labour markets. These roles stand to be heavily affected because they depend on writing and programming skills, precisely the skills that generative AI tools like ChatGPT have the most potential to transform.

In Canada, none of these roles employs a particularly large number of people (a combined 4.0 per cent of employment across the 12 occupations). So ChatGPT and similar tools are unlikely to cause major changes in employment levels for these roles. They do, however, have the potential to drive significant improvements in labour productivity. For example, a recent study gave marketers, grant writers, consultants, data analysts, and human resource professionals access to ChatGPT. Participants completed their tasks 40% faster, and the quality of their work increased by 18%!

The future is AI-powered, and individuals and businesses must strategize to implement ChatGPT in roles that align with organizational goals and help drive efficiency and innovation.

Wednesday, December 27, 2023

Digital Pedagogy Toolbox: Integrating Digital Literacy Practices

I recently wrote for BCcampus as a part of their "Digital Pedagogy Toolbox Series". In my article, I explored how educators and learners can develop digital competencies and skills by using the B.C. Post-Secondary Digital Literacy Framework as a roadmap. The framework includes eight thematic competencies within digital literacy: 1) ethical and legal; 2) technology; 3) information literacy; 4) digital scholarship; 5) communication and collaboration; 6) creation and curation; 7) digital well-being; and 8) community-based learning.

What #DigitalLiteracy meant 20 years ago is certainly not what it means today. Today, as learners, educators, and digital citizens, we are not just consumers of digital content; we are also creators, curators, and contributors, and along the way we are all leaving our digital footprints behind.

#DigitalCompetencies that underpin digital literacy involve a deeper understanding of how to use digital tools and technologies in various contexts, adapt to new digital environments, and critically evaluate digital information.

As educators, integrating digital literacy into every aspect of the learning journey is not just a pedagogical choice; it's an ethical imperative.



Tuesday, December 12, 2023

Activity vs. Performance Measures in Training Evaluation

Image by Gerd Altmann from Pixabay

Organizations are investing in employee training and ongoing development. To ensure these learning and development investments pay off, it is important to evaluate and measure their impact. But which metrics should you track? Which measures offer the clearest picture of a training program's impact?

Evaluating the impact of training initiatives can be done using activity measures and performance measures.

Activity measures focus on the "process of learning" and capture different aspects of the learning journey: training completion rates, attendance records, time allocated to specific activities during training, and so on. For example, when measuring the impact of software training, activity measures may include how many users participated in the program, what percentage completed it successfully, and how much time was dedicated to working in the software during the training.

Performance measures focus on the "outcome of learning" and help assess how well learners can apply their knowledge to real-world tasks and situations. Job performance, customer satisfaction, and test scores are some examples of the types of measures that can be included in this category. For example, when measuring the impact of software training, performance measures can include the number of errors made by software users, the scores on a test of software proficiency, and customer satisfaction with the work produced by users of the software.
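To make the distinction concrete, here is a minimal sketch in Python of how the two kinds of measures could be computed for the software-training example. All field names and numbers are hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical per-learner record from a software-training program.
@dataclass
class TrainingRecord:
    completed: bool            # activity: did the learner finish the program?
    hours_in_training: float   # activity: time spent working in the software
    errors_after: int          # performance: errors made on the job afterwards
    test_score: float          # performance: proficiency test score (0-100)

def activity_measures(records):
    """Activity measures: the 'process of learning'."""
    return {
        "participants": len(records),
        "completion_rate": mean(r.completed for r in records),
        "avg_hours": mean(r.hours_in_training for r in records),
    }

def performance_measures(records):
    """Performance measures: the 'outcome of learning'."""
    return {
        "avg_errors": mean(r.errors_after for r in records),
        "avg_test_score": mean(r.test_score for r in records),
    }

records = [
    TrainingRecord(True, 6.0, 2, 88.0),
    TrainingRecord(True, 4.5, 5, 72.0),
    TrainingRecord(False, 1.0, 9, 40.0),
]
print(activity_measures(records))
print(performance_measures(records))
```

Note how the activity measures can be computed the moment training ends, while the performance fields require follow-up data collected on the job, which is part of why performance measures are costlier to gather.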

Activity measures are good for assessing initial engagement and training completion, while performance measures provide a better picture of how well employees are applying their newly acquired skills.

However, performance measures come with challenges. It is difficult to identify measures that are directly related to the training and that accurately reflect the desired outcomes. Collecting and analyzing performance data is also costly and time-consuming. Most importantly, it is hard to isolate the impact of the training itself: changes in performance may stem from other factors, such as changes in the work environment, rather than from the learning initiative. So while it is good practice to plan evaluations around performance measures, they may not be applicable or cost-effective for every kind of training.

A combination of activity measures and performance measures is usually the best option for gathering meaningful and valuable data. The right mix, however, will differ according to the particular training or learning approach and its intended results.