National University Podcast Series
National University Deans, Faculty, and Leadership discuss a wide range of topics with a focus on the higher education community. Tune in to hear from our experts, alumni, students, and faculty. Current programs include the Center for the Advancement of Virtual Organizations (CAVO), the Virtual Education Support Center (VESC), and the Whole Person Center (WPC), formerly the Virtual Center for Health and Wellness (VC4HW).
CAVO Ep. 99: Beyond the Screen - Building Ethical Futures in Virtual Spaces
Technology connects us in ways once unimaginable, yet it also raises urgent questions about accountability and our shared humanity. In this episode, Dr. Emi Barriesi, CAVO Visiting Virtual Expert, joins Dr. Melody Rawlings, CAVO Director, to explore the future of ethics in virtual environments and the guiding values that can ensure technology becomes a bridge to opportunity rather than a barrier.
SPEAKER_01:Welcome to the Center for the Advancement of Virtual Organizations podcast, Beyond the Screen: Building Ethical Futures in Virtual Spaces. I'm Melody Rawlings, Director of CAVO and professor in the College of Business, Engineering and Technology at National University. Today I'm joined by Dr. Emi Barriesi. She's our 2025 Quarter 3 CAVO Visiting Virtual Expert. Dr. Barriesi is a human-centered global agile leader and brings 15 years of remote experience in technology and organizational transformation, informed by industrial-organizational psychology. She has successfully coached a Fortune 100 organization through a rigorous agile transformation and most recently spearheaded the establishment of business agile practices within business intelligence for a medical organization. Dr. Emi is the developer of the Human Ethics Approach for Remote and Tech-Hybrid (HEARTH) Leadership Model. Welcome, Dr. Emi, and thanks so much for taking the time to come and chat about this important topic.
SPEAKER_00:Hi, Melody. It's great to be here today. Thank you so much.
SPEAKER_01:Absolutely. I'm so excited to dive into our conversation. But to get us started, would you tell us a bit about yourself?
SPEAKER_00:Absolutely, and thank you for the intro as well. I can add a few things from a career perspective. I've been working primarily in the technology development space, mostly with a family of methodologies called Agile, an umbrella of development styles we've typically used when creating software, applications, and data products. So there are lots of different kinds of technology that I've touched along the way. I'm also a writer and a speaker, specifically around the leadership space: remote leadership and technology leadership. And tech hybrid, which you mentioned in the model that I developed, HEARTH. That particular style of leadership transcends virtual environments; it also goes into what we've called tech hybrid, which is really related to socio-technical teams, a term that may be more familiar to folks than tech hybrid. It means you are embedding AI, robotics, or some other modern tech in your team as an actual team member. So that's really the area and the sphere I work in, and I'm just very excited to speak about ethics in this space. That's really the heart of a lot of what I do.
SPEAKER_01:Well, you certainly bring valuable experience and insight on this topic, that's for sure. So let's get started. My first question is: what do you feel determines whether technology becomes a powerful tool or a persistent barrier? I think most of us have experienced it both ways.
SPEAKER_00:Oh, absolutely. And I would even say that the challenges can sometimes overshadow the usefulness or the power when we're designing how we use our technology as tools. The determining factors I was really thinking about are intention and design ethics, access and equity, and then integration into workflows. What I really mean by that is that the tool of technology is powerful when you use it purposefully. Using it just to use it can make it a nuisance or even a barrier; it has to serve a purpose. If you design how you're using technology to serve the ultimate goal or outcome, then it becomes a really powerful tool, whether you're supporting your process, supporting how your organization is designed, or connecting people. All of those things, with the added element of respecting the human dignity at the core of your team, make it super potent. And accessibility as well. When we think about virtual teams, remote teams, or teams integrating technology, there actually are a lot of considerations depending on where you are geographically. If we don't have equitable access, or if your version looks different than my version, whatever the technology is, or I can't use it as well as my teammates or colleagues can, that really dissolves some of the use that tool could have. And then, how well is it integrated, and are people really able to use it well? One of the examples I thought of was even something simple like Teams or Slack. Is it really integrated into how you work? Is there a common and shared understanding of how you're supposed to use it as your workflow goes through its normal process? And is there adoption because people understand how to use it? I think those are some really big factors. When it becomes an obstacle, people stop using it. That's something you can see in data, but even experientially, people don't want to use technology that's not working well. So that intentionality really helps, and then making sure there's that accessibility and equity.
SPEAKER_01:Absolutely. I wish we could break down everything you touched on there; we could make a podcast on any one of them. It's so true. We do hear some people argue that technology makes life easier and more efficient, and when it works, it does. But we hear others say, and we've experienced this too, that it complicates things and just adds unnecessary stress. My husband will tell me he often gets really frustrated with passwords, and that's something we've all encountered at times. While we understand that changing our password is needed and required for security, it can be so frustrating when we can't remember it, or it didn't get saved, or whatever. And then, I don't know about you, but I'm not real trusting of those apps that come out to help us save passwords.
SPEAKER_00:Yes.
SPEAKER_01:I'm not real trusting of that, that's for sure.
SPEAKER_00:Absolutely. I definitely feel that way. And it reminds me, I just threw away a padlock, one of those combination locks you used to put on your locker in high school. The sticker was gone from the back, and I just thought, well, this is useless now, and I tossed it. That's really what this reminds me of: the frustration of, I cannot use whatever this thing is, because it's not created in a way where I can access it. And I don't trust a lot of those systems and tools. I go and check their privacy policies, who they partner with, their platforms, and all of that.
SPEAKER_01:Yeah, exactly. Because you feel like, wherever I'm using a password, that could get hacked, but so could an app that stores my passwords. So I've broken down to being really old fashioned, and I have my little black book. It's written down. Some people would say that's not good, because if somebody gets hold of your little black book, they're going to have access to your life. There's definitely not one size that fits all, that's for sure. Now I want to move into the idea of integrity, and I know all of this is of course in your wheelhouse. My question is: do you think it's more difficult to maintain and demonstrate integrity in a virtual environment?
SPEAKER_00:Yeah, so I would say yes, it is more difficult to demonstrate it in the virtual environment. Not that it is more difficult to be a person of integrity, but it is more difficult to demonstrate it. There are trust cues that can be lost when you aren't in a physical environment, things like tone and body language, or the informal things that human-to-human communication and connection can ignite in an in-person setting. Some of that can be lost, especially if you don't use your camera in your virtual environment, or you don't have a natural way to pick up on some of those other cues. I think that's what can diminish how you demonstrate trust there. It all gets filtered through the technology you're using, which can even obscure transparency. One of the elements of the leadership model I designed is a combination of two things, trust and transparency. I very intentionally married those two concepts because in the virtual environment there has to be an added layer of intentionality when you are developing trust. Technology mediates relationships and communication in some very key ways, whether it's visibility, the ability to sit and read those cues, or any of the other really human elements and aspects of us, and that can create difficulty. So it's not harder to have integrity online, but it is easier to hide a lack of integrity if it exists there. And I think that's the cautionary part I would offer leaders. An individual can have a lack of integrity in person and virtually as well, but it's so much easier to hide a lack of integrity in a virtual environment. So figuring out some really good strategies, for leaders, team members, or anyone in the organization, is a very intentional process of reducing some of those filters that technology can put in there, which a person without integrity would naturally use to their advantage to hide that lack of integrity.
SPEAKER_01:Absolutely. As you were sharing that, what came to my mind were employers who do not want to allow their employees to work remotely, whether in a hybrid format or fully remote, because of the trust factor. And I guess it's just human nature: when we see someone in front of us, we're more likely to trust that person. So when someone comes to the office, they think that person is more likely doing their job, when in reality that's not necessarily true; they could go to their office and surf the web or be on Facebook or whatever social media. But there's no doubt that, as you pointed out, it's easier to hide a lack of integrity in a virtual environment. And that segues really well into my next question. You've touched on some of these already, but what are some big examples of integrity challenges, and what are potential solutions?
SPEAKER_00:Absolutely. I think the biggest integrity challenge that I have personally experienced, and even written some about, is the transparency element. I'll call it selective transparency, for leaders and for anyone in an organization. Being very secretive can be a lack of transparency, and because it's easier to hide in a virtual environment when you're not being transparent, that to me is really the biggest challenge. The solution is committing to being intentionally transparent, whether that's publishing consistent updates or keeping documentation. I often tell my teams: documentation is a form of communication. We are a virtual team, and we must have this as a form of communication. It's also a love letter to your future team, which you may be a part of; tomorrow's team likely will include you. That transparency creates a culture where integrity across the team members has a shared understanding. So that's one of the biggest ones I've personally experienced as well as seen in a lot of research. The other one I was really thinking about, when we're talking about including AI or robotics in a team, is invisible bias in algorithms. AI is often sold as a bias-free product, but we don't think about the creators, the ones putting in the information it feeds on and generates data from, as having bias. At the end of the day, a lot of organizations building AI tools market themselves as having no bias. "Stop hiring humans" is one of the billboards I've seen out there. And there's a marketing ad I saw for coaching: come get our AI coaching tool and eliminate the bias of your human coach. Those kinds of things make me really sad, actually, because at the end of the day, the tool does have bias. That's a challenge that's going to be extra difficult as we utilize these tools in remote environments, because of the lack of connection that might already exist on a team. So solution A is having those intentional connection points, but also making sure that your teams are educated on what bias is present. There are some very simple questions you can ask to discover that there are biases in your tools; a lot of them have politically motivated backgrounds. I do share those sometimes in speaking settings, to be a little cheeky and forward, and say: just ask this one question and you'll find out that your tool has bias. The response may rub people the wrong way, because there's bias embedded in the response. So we have to remember that there is bias in our algorithmic design as well, and I think that's a challenge right now for virtual teams especially, because they're being challenged, and will continue to be challenged, to use tools to be more productive. The last thing I really think of in terms of integrity is the erosion of boundaries that virtual teaming can bring. In the office you leave at a certain time, but when you're virtual, I think some leaders can erode those customary hours norms, the nine to five, and that's even harder on global teams.
So really the solution in that kind of environment, I think, is making sure there are clear norms around boundaries. Whether you document them or they stay unspoken, you should at least have a conversation so you can set norms. Things like synchronous and asynchronous communication: when do you use which, and for what? These may seem like overkill conversations for leaders sometimes, but the virtual environment really requires us to give some airspace to the norms we're setting, so that there aren't unrealistic expectations of availability, or some erosion of how we're going to work with each other that's different from the in-office environment.
SPEAKER_01:Oh my goodness, you raised so many things I wish we could unpack, starting with the bias. Absolutely, AI is biased; I've experienced that as well, and people are beginning to recognize it more and more, I believe. And I think it's incumbent upon us, when we do encounter bias in AI, to correct it. So there's that aspect. And then I also thought of my students. When they're working on their dissertations, they'll say something about needing to address the role of the researcher, and we're all biased, right? We're all biased in some way, so we can't eliminate it, but we absolutely should strive to mitigate it. And like you said, it becomes important to begin to recognize it, just like journaling or having these thoughtful conversations with ourselves, these moments to really reflect on our position. We live such busy lives that I don't think most of us spend much time really reflecting on where we stand on things, on what we say or how we think about something, or on seeing things from someone else's perspective. So I'd like to pivot now: we've talked about integrity, now let's talk about ethics. People might say, if technology consistently raises ethical problems, should we use it at all, if there are going to be these issues and it's just something more we have to think about? I don't agree with that; I think we should continue to use it, but we have to use it responsibly. So, what are some of the most pressing ethical challenges you see emerging from technology right now? And of course, AI instantly comes to mind.
SPEAKER_00:Absolutely. We really talked about AI bias already, but there's also this concept of explainability. I think there's a certain level of technical literacy that not just our leaders but our entire workforce needs to start having with AI: really understanding how decisions are being made with it, what it's being used for, and how it's being designed. A lot of people don't understand the nuances, or even the different types of AI and the ways it's being used behind the scenes. So to me, that's a big ethical dilemma and challenge. If we're using AI for things like surveillance or monitoring or all kinds of other things, there needs to be an added layer of transparency, to an appropriate level. There may be some uses that are confidential or proprietary; those cases are understood. But for your day-to-day operations and the day-to-day issues we face in a work environment, decision making specifically is what I'm thinking about. Leaders are using it to aid their decision making, but there's a lot of discussion about using it even in replacement of human decision making. I think that is a really big ethical dilemma because it eliminates the human accountability factor; then there's no one who has to answer for the decision, and at the end of the day there should be a human in the loop. So that to me is one. Data exploitation as well, which can go hand in hand with AI, but even things like personalized data within an organization: how you store it, how you use it, informed consent, privacy policies. Walmart is one I love to give as an example. If you go to their website, when you say yes to that privacy policy, oh my gosh, the things you are saying yes to are quite remarkable. You're basically saying: use my information and sell it to your partners however you feel like. It's quite amazing. I share that often in different settings just to get people to take a look. It's a big retailer, everybody knows the name. Take a look at their privacy policy and look at what you're giving away when you say yes.
SPEAKER_01:And we just hit agree and think nothing about it.
SPEAKER_00:Exactly. And so I started going through all my apps and reading all the privacy policies.
SPEAKER_01:Yeah, that's terrifying.
SPEAKER_00:It can be, right? There were things I didn't recognize I was saying yes to; I didn't realize I'd said yes to my camera being on in some of these. That was a really big one. And they monetize this data, right? Our information gets monetized time and time and time again. You might have even gotten a letter in the mail saying your information was part of a data breach. Just yesterday, someone was trying to create cryptocurrency accounts in my name, so some system failed somewhere. And the other thing this makes me think of is what I very dramatically call the visual collapse of reality. That's related to AI too; I mean the things colloquially known as deepfakes or misinformation. I saw a picture on LinkedIn that was four different images of a cake, a multicolored cake with frosting on a plate with a fork. The poster asked which one of these is real, and I narrowed it down to two out of four, but I could not tell which of those two was the real cake. I could not tell. AI is getting better and better, and people are putting out videos and images that are actually not real. What is it being used for, and why is it being done that way? There are some great use cases. Maybe in marketing you want it to look really real and be quick, without having to actually go cut the cake, slice it, dress it, and do food photography. Fantastic, great use case for marketing. But what else are we using it for? And how can we discern the difference between your quote-unquote deepfake, or your visually collapsed reality, and something that is real? I think that's a huge ethical dilemma that we really need to get a bit of a wrangle on in work environments.
SPEAKER_01:That's for sure. We live in a world of deception, and it's becoming more and more that way all the time. It is so difficult to distinguish real from fake, and I think it's just going to get harder as AI becomes more and more advanced. So what do you feel is the key to balancing rapid innovation with the accountability needed to avoid misuse or harm?
SPEAKER_00:So I think innovation without guardrails is just speed, right? Innovation can be revision, or it can be creation of something new: disruptive, radical. It's a wide spectrum, but it can also just be a small update to things. Our progress has to have brakes. Innovation is a vehicle, and just like a car, it has to have brakes and a steering wheel and all of that. I think that's really the key: not just innovating without purpose. One of the things in the agile space that I often hear is "move fast and break things." While that's fine when there's not really a cost or much risk associated with it, in a lot of environments there really is a risk, and a cost associated with that risk, if things go wrong. So there do need to be guardrails, there needs to be purpose behind it, and again, that word I love, transparency, and then accountability. Who's accountable for the innovation? What's the purpose of it? Why are we doing it? And how are we being transparent outside of our team, with our stakeholders or customers or whoever the relevant key stakeholders are in your environment? How do we ensure that we're being open and explaining what we're doing? All innovation needs a guardrail, and that guardrail or gate may be very wide; you might have a huge playing field for what you're trying to do. But at the end of the day, there needs to be some gate on the outside, so that you've got shared expectations around where you don't go and what you don't do.
SPEAKER_01:Absolutely, and that's the human factor that you mentioned. These things need to be in place, but they also need to be planned to be in place. So, my last question: all of this has been so practical, but would you sum up three guiding values that you would recommend as new technologies are developed and implemented? If you could share those three, what would they be?
SPEAKER_00:Absolutely. So the three that I use, and that I would advocate for others, are these. Number one is human dignity: designing your technology, your processes for how you use it, and what you use in your environments in a way that promotes and preserves personhood. I think technology is so humanized these days that sometimes we even have to explicitly define what personhood is in our team environments, especially if you've got a technology team member. What's the difference between me and this robot or this AI tool? So preserve that personhood, which is a shared inner experience that all humans have, in your design and in your operations. Two is transparency, which I've mentioned ad nauseam, but I will never stop ringing the transparency bell.
SPEAKER_01:Nor should you.
SPEAKER_00:I think users should understand how and why tech behaves the way it does, and we also should understand how and why we've chosen to implement things. There are organizations out there where you have to have a camera on: in lieu of being in the office, we're going to keep you with a camera on all day long and monitor you. So, what is the purpose of that? What's the transparency there? Having that human dignity part, like I said earlier, but explaining it. A leader may then realize, okay, there is not a good purpose behind this. So transparency can also reveal some of the things you're doing that may not be super ethical. And three, the last one, is accountability: clear and enforceable ownership of outcomes when you are designing, implementing, and using tech. There needs to be a human who, at the end of the day, is able to answer for the intended and, more importantly, the unintended consequences of what we're doing in the technology space. Those three elements to me are non-negotiables when you are building tech or implementing and integrating it anywhere: preserving humanity, being transparent, and making sure there is a human at the helm at the final decision point for your accountability. To me, that is critical.
SPEAKER_01:I love that: human dignity, transparency, and accountability. That's so well said. Too often we charge ahead like a bull in a china shop; we don't pay attention like we should, we don't plan, we don't consider these three guardrails, as I think you call them, and that gets us into a lot of trouble. If we spent some time really thinking things through and considering all the aspects of each one of these, it would save us so much time; it would save us in so many ways. So I really appreciate you sharing those. Dr. Emi, thank you for sharing such great insights on ethics in virtual environments and how leaders can use technology as a bridge to opportunity and not a barrier.
SPEAKER_00:Absolutely. I appreciate this opportunity. This is something I'm very passionate about, for sure. So thank you.
SPEAKER_01:Well, that is very clear, and it's also clear that you have so much knowledge and expertise to share in this space. It's been such a pleasure talking with you. Thank you for joining us and sharing your expertise in support of the Center for the Advancement of Virtual Organizations. Your insights truly have been invaluable, and we're confident our listeners will gain much from the information you've shared. Thank you.