
Hitching a Ride on AI

As humans, we’re not too keen to be knocked off the top rung of the proverbial intelligence ladder – that all-powerful position we collectively hold which allows us to make the rules!

But as we look down upon our underlings, there is a growing sense of uncertainty that a species of Artificial Intelligence (AI) is gaining momentum, and in due course will launch an attack for the primary position. And if this happens, what will become of us, and will we cope with being second-in-charge?

Of course I’m being a little dramatic in sketching a future scenario, but let’s not underestimate the speed at which Artificial Intelligence is moving towards Artificial General Intelligence (AGI). Just recently the Allen Institute announced one of its AI systems had completed and passed a Grade 8 science exam set for human students, scoring a remarkable 90 per cent. This took only four years to get right, and with exponential growth in computing power, quantum computing advancement and AI capabilities, we would be naive to think highly capable AI is decades away.

But we don’t need to panic, we need to be clever. Thinking outside the box, creativity and innovation are core strengths of humans, so we need to see AI differently: not as something to fight against, but as something to augment our own intelligence with.

AI can also stand for Augmented Intelligence

There are three key reasons why we need to augment our own intellectual capability:

  • The speed of DNA (biological) evolution is too slow. Very little has changed in our DNA for the last 10 000 years. We can’t rely on natural evolution to retain our top spot.
  • Human progress comes from learning, but the volume and speed of new material creation far exceed our collective ability to consume, digest, review, examine, hypothesise and research. We need help!
  • AI and machine learning have the capability to learn significantly faster than humans and are increasing this capability exponentially.

It’s critical that we ensure AI works for the benefit of human intellect improvement, not just human progress. We are already seeing significant advancements in Brain/Machine interfaces, or neuro-technology. There are some great examples in areas such as neuro-prosthetics and neuro-stimulation. These are blurring the lines between biological and technological improvement and are giving us a glimpse of our co-existence.

I’m not necessarily promoting implantable devices – I personally don’t fancy that – but there is exciting research taking place around non-invasive augmentation, as well as neuro-plasticity, that is showing how our brains can be adapted and used more productively. I think this is an area of research that holds a lot of promise. If you are interested in some amazing stories about brain plasticity, I recommend you read “The Brain That Changes Itself” by Norman Doidge. We humans are nothing short of amazing!

I’m looking forward to keeping our spot at the top of the ladder, and I’m even more excited that we have the opportunity to extend the ladder upwards and carry on climbing as a species, using our AI resources to achieve it.


Don’t trust AI? How to allay fears and build trust in AI tools

Originally written for Inside HR

AI is changing the way businesses operate, however, effective design and consumption frameworks need to be created and implemented in order to allay fears and build trust in AI tools.

You don’t have to venture too far into the realm of HR technology to discover the rapid growth of AI tools silently influencing decision making, providing newly discovered insights and simply making things easier to do. Whether this is analysing candidate facial expressions or voice tones during an interview, examining individual network and collaboration habits for leadership potential, monitoring employee fatigue signals or spotting who is likely to exit your company, AI is changing the way we operate.

But do we trust AI outcomes? And I mean the collective ‘we’, both HR professionals administering these tools as well as applicants and employees who are the AI subjects.

In July 2019, I ran a snap poll via LinkedIn to test general perceptions. Let me at the outset declare this would likely not pass academic requirements of a well-designed and administered survey, but LinkedIn is a global business-focused platform and the results provide a reasonable reflection of this cohort.

The initial question, on trusting AI, shows 61 per cent of respondents have partial to serious concerns about trusting AI outcomes. A further 25 per cent registered complete or strong disagreement with trusting AI outcomes, and nobody completely trusted them.

Of course, one could argue that trust is like pregnancy, you can’t be half-pregnant, so if you don’t fully trust something, you essentially distrust it. We should, however, look past this binary perspective and understand that respondents are expressing their uncertainty towards something that is largely an unknown entity. People are concerned, but not necessarily against it.

As an HR or business professional, when three out of five people are not supportive of AI, it’s not something you can lightly ignore. From an HR perspective, you could be losing good candidates and alienating top talent. There is plenty of newsworthy evidence of AI bias, AI decision failure and even fake AI results to warrant concern.

One of the fundamental criticisms of AI outcomes is the inability to explain the answer.

In the same poll, we asked respondents if they would trust AI outcomes more if the reasoning was visible. A whopping 92 per cent agreed or strongly agreed. While this is good news, the reality for most HR professionals is they will be unable to do this. Most HR tools using AI are commercial off-the-shelf products, producing commoditised AI answers. If it’s a true AI tool, it needs lots of data, more than you probably have of your own. The algorithms sit in a ‘black box’ and even if you could access the code, understanding how the answer is reached is complex.

“One could argue that trust is like pregnancy, you can’t be half-pregnant, so if you don’t fully trust something, you essentially distrust it”

This is why the third question in the survey – having an AI code of ethics – is so important. Close to 50 per cent of respondents scored this at the top of the scale. There is a significant amount of good work evolving in this space: many governments, technology giants and private companies are discussing and developing important principles. Some of the key focus areas include concepts such as ‘transparent AI’ and ‘white-box’ development, which will increase credibility by allowing answers to be explained. Other areas are independent algorithm auditing, validated unbiased training data, and developers using open-source methods and code.
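One way to picture the ‘white-box’ idea is a scoring model whose reasoning is fully visible alongside its answer. The minimal Python sketch below is purely illustrative (the feature names and weights are invented for this example, not taken from any real HR product): every factor’s contribution to a screening score is surfaced, so the outcome can be explained to the candidate rather than emerging from a black box.

```python
# Illustrative 'white-box' screening score: a transparent weighted sum
# where each factor's contribution is recorded, so the final answer can
# be explained. Feature names and weights are invented assumptions.

WEIGHTS = {
    "years_experience": 0.4,   # years in relevant roles
    "skills_match": 1.5,       # fraction of required skills present (0-1)
    "interview_rating": 0.8,   # panel score on a 1-5 scale
}

def explain_score(candidate):
    """Return the total score plus a human-readable reasoning trail."""
    total, reasons = 0.0, []
    for feature, weight in WEIGHTS.items():
        contribution = weight * candidate[feature]
        total += contribution
        reasons.append(f"{feature}={candidate[feature]} -> +{contribution:.2f}")
    return total, reasons

score, trail = explain_score(
    {"years_experience": 5, "skills_match": 0.8, "interview_rating": 4}
)
# total = 0.4*5 + 1.5*0.8 + 0.8*4 = 2.0 + 1.2 + 3.2 = 6.4
```

The point is not the arithmetic but the `trail`: a commercial black-box tool returns only the score, while a white-box design also returns the reasoning, which is what 92 per cent of poll respondents said would increase their trust.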

AI will become a powerful solution to many of our business problems. But while it’s in its infancy, we need to build effective design and consumption frameworks in order to allay fear and build trust in these tools.

Key takeaways: HR and AI

  • Three out of five people don’t trust AI outcomes. As HR professionals we need to look for ways to address applicant and employee concerns.
  • Most AI tools used in the HR space are commercial off-the-shelf products. They may use some of your data, but the results are based on other data that you know little about.
  • If you are using AI tools in HR, ensure you declare this to users and find ways to explain how the tools arrived at an answer.
  • In the future, applicants, your suppliers and government agencies will ask you to show them your AI code of ethics. If you use AI, you should start working on this.

How complexity & simplicity come together in the art of future HR design

Written for Inside HR, August 2019

Simplexity is an emerging theory that proposes a possible complementary relationship between complexity and simplicity, and it has important ramifications for HR professionals looking to improve both the mechanics and dynamics of the workplace, writes Rob Scott

Many organisations are moving towards a digitised work environment. And while there are many facets to this transformation agenda, the one overriding message from many human capital thought-leaders around the world is the need for increased simplicity. Reducing complexity in HR processes and activities is seen as an elixir for Josh Bersin’s overwhelmed employee, who is suffering from low engagement and negatively trending productivity levels.

But does the adoption of a simplicity mantra just mean problem-solving and innovating by making things more logical and easier? That would be nice, but it’s a little more complex than that – it’s what we call simplexity, a term which describes a complementary relationship between complexity and simplicity.

Firstly, why do we have complexity in our HR processes? Well, we don’t typically aim to build complex outcomes, but over time we make modifications, often in a reactionary way to ensure continuity, to align with new technology, include a process owner’s ‘great ideas’ or to rectify ‘minor’ problems.

In many respects we don’t notice the ‘complexity accumulation’, just as we don’t realise our own weight gain until we’re confronted with a Facebook ‘Memories’ notification of our slimmer-self three years earlier.

“Trying to resolve processes which have evolved into complex problems is likely to result in a confusing mess”

Over time, organisations spend a lot of effort and money trying to patch and rectify problems they can’t really solve. But at least the problem temporarily disappears, right? This may last for a while, but eventually we reach an inflection point, where we move beyond a point of ‘functional complexity’ – in other words, a level of complexity which is still acceptable, but not optimal. We all know what the ‘chaos zone’ feels like, and we often react with statements like “How on earth did we land up like this?”.

Simplexity graph

When we attempt to resolve problems within the ‘chaos zone’, often using simple logic and keeping other inputs or outputs constant, we end up with a confusing mess. Ownership, involvement and role clarity in understanding the problem become blurred. Re-imagining is often the best way forward in these cases. Painful, but it gets you back into the right zone.

What we really mean by simplicity is the end-user experience, not the back-end design. It’s a dichotomous situation, which is why we refer to it as “simplexity”.

It’s a reality that if we want our organisations and people to adapt, grow, be agile and leverage new technologies such as AI, automation and Blockchain, then complexity by definition will increase. However, if we want efficiency and improved people productivity, then complexity from an experience perspective must decrease.

2 steps to simplexity
So, what do you need to do to manage this contradiction?

“Organisations spend a lot of effort and money trying to patch and rectify problems we can’t really solve”

Firstly, accept that effective simplexity is a function of our understanding, not our personal desire to solve a problem or introduce something new. This means we should engage people with the right skills, who recognise the subtleties and nature of the complexity and who can unpack the problem in ways which allow others to give appropriate input and direction. Including the right design skills can ensure you build the bridge between complex creations and simple experiences.

Secondly, ensure you don’t land up in the ‘chaos zone’. Make sure you constantly evolve within the ‘functional complexity zone’ and purposefully block any silent creep into the chaos zone. Actions such as process effectiveness alerts, engagement results and continuous improvement cultures can serve as ‘chaos zone’ mitigation solutions.

Bottom line – simplicity is an experience, not necessarily the design.

Simplexity in a nutshell

  • While we all want process simplicity, it’s a reflection of the output or experience rather than the back-end design.
  • Simplexity is a dichotomous term because it simultaneously requires the adoption of more complex tools such as AI in order to progress, but at the same time needs the end user experience to seem simple.
  • Trying to resolve processes which have evolved into complex problems is likely to result in a confusing mess.
  • Achieving simplexity is a function of our understanding. We need to step back from what we don’t know and introduce the appropriate skills.

Time to turn the Ulrich Model into a Digital Delivery Model

Written by Rob Scott for Inside HR

The Ulrich model of HR delivery has been the cornerstone framework of HR for the past 20 years, but in light of the newly emerging digital world, modern HR must adapt to become agile and remain effective, says Rob Scott

There is no denying that all of us are on a digital transformation journey. Our work environments and operating models are feeling the strain of being caught between more traditional business operating models and the newer, agile demands of techno-digital environments. Deciding whether to toss out the old approach or focus on a more evolutionary adaptation of existing ways can be daunting for HR leaders.

The Ulrich model of HR delivery, developed by Professor David Ulrich 20 years ago, has been a solid guiding framework, in full or in part, for most HR functions globally. And even though the model has been contested over the years, the building blocks of the model – HR Shared Service Centres (SSC) for administration, Centres of Excellence (CoEs) for content expertise and the HR Business Partner (HRBP) for business alignment – have worked, so why change something that ‘ain’t broke’?

The underlying design principle of the Ulrich model has been about effective and streamlined connectivity between the elements of HR and business operations and strategy. It was built on assumptions that predate the digital age. But the digital work environment has introduced new technologies such as Robotic Process Automation, cognitive computing and Artificial Intelligence (AI), new thinking styles such as Design Thinking, evidence-based decisions supported by deep-dive data analytics, as well as a deluge of demographic, ethics and loyalty impacts. As HR professionals, the worst thing we can do is bury our heads in the sand and fall prey to the normalcy bias, believing things will always function the way they normally do. We need to consider how a digital environment is changing the way the workforce is empowered, interacts and connects.

“The Ulrich model as a framework is still a relevant HR operating model, but the transition from the old roles to the new ones is an important adjustment required to support digital work environments”

In a digital world, HR must respond and adapt quickly to changes which impact the business, whether through external competitiveness or internal innovation. This will require the roles of the HRBP, SSC and CoE to transform into ‘early warning’ detectors and predictors which can seamlessly morph into problem-solving gurus and inform the creation of relevant and unique HR solutions. How should these roles change?

HR Business Partner » Alignment Agent

Modern HR technology, digital and automation tools fully empower line managers to be effective in hiring, managing and developing their staff. It’s time to move beyond playing the quasi-admin role for line managers. The Alignment Agent is externally focussed on your organisation’s supply chain and customers, ensuring HR solutions add customer-focussed value in line with business strategies and advising line managers and executives on required changes. The new Alignment Agent seeks out business issues from a people perspective and solves problems with data analytics.

Shared Service Centre » Analytics Engine Room   

As automation and robotic processing take over administrative tasks and AI replaces more complex HR admin work, the admin centre becomes obsolete but is reborn as an Analytics Engine Room that supports business problem solving and provides predictive capability to business leaders. Its outcomes inform future HR solutions. The future SSC employee is a data scientist or analyst. The engine room need not be HR-centric only; it can be part of a broader analytics entity or an outsourced service.

Centre of Excellence » HR Solution Provider

The new CoE will still require deep-skilled and experienced HR practitioners who will remain the thought leaders for appropriate people practices. They will be responsible for developing and deploying solutions which are identified by the new Alignment Agent and use data-driven outcomes from the Analytics Engine Room to validate their solutions. Solutions are not always standardised and can be focussed on providing the best solution for a part of the business.

The Ulrich model as a framework is still a relevant HR operating model, but the transition from the old roles to the new ones is an important adjustment required to support digital work environments. It requires forward-thinking executives and HR leaders to recognise the different demands of a future workforce and workplace, and an acknowledgement that technology, applied in the right way, is empowering employees and workplaces to be super-agile and achieve significantly more. HR must change.

Some takeaway messages

  • The classic Ulrich model of HR has been the cornerstone of HR delivery for most organisations. It’s a good model, but it needs to be aligned to the emerging digital work environment.
  • Much of what HR Business Partners and HR Shared Service Centres do is administrative in nature. The HR software, automation and AI tools now available will completely change how these mundane activities are done. The Ulrich-defined roles must adapt.
  • The old HR Business Partner role needs to drop the line manager ‘hand-holding’ style of management – modern HR tools make line managers completely self-sufficient.
  • Shared Services as we know it will disappear as administrative tasks are automated or managed by AI. A major skill refocus is needed to change these entities into Analytics Engine Rooms.

Artificial Intelligence: Are HR Professionals at Risk?

Latest article published in InsideHR

Would HR professionals be as enthusiastic about HR technologies if they contained Artificial Intelligence (AI) capability?

Are we ready to be pushed down the proverbial pecking order of importance by sophisticated AI technology? asks Rob Scott

Very few HR and talent professionals would refute the value that technology has brought to their operations. HR functions have leveraged these tools to become efficient, effective, collaborative, engaging and more accurate. But would HR professionals be as enthusiastic about HR technologies if they contained Artificial Intelligence (AI) capability that could predict more accurately and make better business decisions than the highly educated, people-focused HR practitioner?

At what point does software that is able to pick the best applicant, predict who is most likely to resign or identify the best mentor for a talented employee, become a legitimate replacement for a highly paid HR practitioner?

Most HR professionals I engage with don’t believe this will transpire, citing the complexities of human behaviour, personal choice and the absence of universal logic in managing people in the workplace. In the short term I agree with them, but not for the same reasons they mention. In fact, when I look at how most HR functions rely on standard processes to manage certain events, I have no doubt that near-future HR technology will do a better job than humans in executing these rule-based processes. Our flawed minds can never achieve the same level of efficiency.

“AI in HR is maturing; we are seeing interesting algorithm designs, predictive analytics and automation solutions coming to market”

This is not to say that our current HR technologies are anywhere close to being artificially intelligent. Right now there is a lot of hype-spinning by software vendors about the predictive prowess of their tools, but in reality these are immature tools. We should, however, be under no illusion that sophisticated AI for HR is heading our way. As it becomes more credible and capable, it will displace employees who are focused on maintaining standardised HR processes and mundane transactional work. There is, however, a far deeper and fundamental reason why I believe AI will, in the short term, find a home as a digital assistant rather than as a replacement for HR professionals. It goes to the heart of a human emotion – fear. Having artificially intelligent machines making sophisticated and important people-based decisions feels threatening and generates a level of anxiety about our status as human beings. We are not ready to lose our “superiority” to machines, no matter how intelligent they become.

As an example, Microsoft recently released a small tool which guessed your age from a picture you uploaded. The results were mostly wrong; however, the tool went viral. Why? The reason lies in the notion that while the technology is inaccurate, we feel less threatened by it and are able to maintain our dignity and humanness.

This is a powerful lesson and opportunity for HR software developers. Building AI software that is too accurate and human-like is likely to be rejected or underutilised, not because its outcomes are incorrect, but because it pushes human beings down the proverbial pecking order of importance and insinuates that the work they are doing is demeaning and unnecessary.

“Building AI software that is too accurate and human-like is likely to be rejected or underutilised”

Of course, we shouldn’t forget that technology enhancements have been at the heart of mankind’s industrial revolutions and progress. New machines with capabilities that outshine human ability have typically been met with resistance from those affected, at least until new work opportunities borne from the new technology become evident. AI in HR is maturing; we are seeing interesting algorithm designs, predictive analytics and automation solutions coming to market, but future job clarity in a digital and AI age is still blurry. Until then, AI tools for HR will develop into great digital assistants under control of HR professionals. At least for now the role of the HR professional remains in demand. 

5 key takeaways for HR

  • AI is a growing phenomenon in HR. We are increasingly seeing the inclusion of decision algorithms, predictive analytics and automation tools in HR software.
  • Basic AI tools will have the ability to manage standard HR processes with little to no human intervention, ultimately displacing employees from these mundane roles.
  • Complex AI tools which can make human-like decisions are likely to be rejected in HR because of the implied threat to our status.
  • Whilst it seems far-fetched, HR professionals should start thinking about how to “manage” and integrate artificially intelligent machines in the work environment.
  • Digital HR assistants are already with us managing workflows, finding information and managing large amounts of data. We don’t need to fear AI.

Image source: iStock
