Ethical Implications of AI in the Legal Practice

In this article, Justin Chia, editor of the UCalgary Tech Law Association Blog, explores the ethical implications of the use of AI in the legal practice.

Artificial Intelligence (AI) is beginning to play an integral role in many aspects of everyday life, including the law. The emergence of AI tools, such as ChatGPT, in the legal profession has raised ethical concerns regarding their appropriate role in the delivery of legal services. As the role of AI in the legal practice grows, so too does the importance of ensuring that it is used in a transparent manner that is consistent with a lawyer’s ethical obligations to their clients.

Canadian law firms and lawyers have begun to adopt AI tools in legal practice for their potential to increase the efficiency of legal service delivery and reduce costs for clients.[1] Lawyers have utilized AI tools to perform a variety of tasks, including legal research, discovery and contract review.[2] AI tools are also capable of assessing the viability of litigation, though the extent to which AI is used in legal practice is not fully known.[3]

There are currently no ethical or legal requirements for lawyers to use AI in their practice. The Law Society of Alberta’s Code of Conduct does, however, include commentary on the level of technological competence that a lawyer must possess and the need to understand the risks and benefits associated with the use of technology in relation to the duty of confidentiality.[4] Advancements in AI and its potential benefits for client service may force lawyers to adopt these tools in their practice and take stock of their ethical implications.

AI has the potential to enable lawyers to deliver high-quality legal services to clients in a cost-efficient and timely manner.[5] AI allows lawyers to perform tasks, such as legal research, more efficiently, reducing the number of billable hours charged and, in turn, the legal fees paid by clients.[6] AI can also provide lawyers with insights and analysis that they would not otherwise have identified.[7] As the adoption of AI tools in the legal profession becomes more widespread, lawyers and law societies must take increased measures to safeguard ethical obligations.

The Code of Conduct describes a competent lawyer as one who can apply the relevant skills, knowledge and attributes in a manner that is appropriate to the client’s matters and the nature of the retainer.[8] A key component of the duty of competence is technological competence. A technologically competent lawyer must be capable of using technology relevant to the matter and understanding the risks and benefits of the technology in relation to the duty of confidentiality.[9] These requirements would presumably apply to a lawyer’s use of AI in legal practice.

A recent case of a lawyer in the United States using AI during a civil proceeding illustrates the ethical concerns regarding the use of AI in legal practice. The lawyer had used ChatGPT – an artificial intelligence chatbot capable of producing information in response to human prompts – to conduct legal research and prepare a memo.[10] The memo, which was included in court submissions, contained non-existent cases generated by ChatGPT.[11] The lawyer relied on ChatGPT for these tasks, unaware of its potential to produce inaccurate information.[12]

In the Canadian context, a lawyer’s use of AI without first understanding the risks associated with its use would presumably violate the rule on competence. The lawyer in this instance was unaware of ChatGPT’s potential to produce false information, meaning he failed to understand the risks associated with the technology. Additionally, the lawyer should have informed their client that ChatGPT would be used to make submissions to the court. It also would have been necessary to obtain the client’s consent, since the memo was generated entirely by ChatGPT and seemingly not revised by the lawyer. A lawyer exercising better judgement would have verified the memo for inaccuracies and substituted their own expertise where necessary. This lawyer’s conduct would also have constituted the unauthorized practice of law under section 106 of the Legal Profession Act, which prohibits any person – or, in this case, anything – that is not an active member of the Law Society from providing legal services to the public.[13]

Alberta courts have recognized the importance of safeguarding a lawyer’s ethical obligations to their client in relation to the use of AI. The Chief Justices of each of the three Alberta courts have urged lawyers to use AI responsibly and verify the accuracy of AI-generated materials before they are submitted to the court.[14] Courts in Manitoba and the Yukon have issued directives requiring lawyers to disclose to the court how and to what extent AI was used in preparing court submissions.[15] Despite the steps taken by Canadian courts to reconcile a lawyer’s use of AI with their ethical obligations, law societies have yet to follow suit.


[1] Teresa Scassa, “AI and Legal Ethics” in Florian Martin-Bariteau & Teresa Scassa, eds, Artificial Intelligence and the Law in Canada (LexisNexis Canada, 2021) at 6.

[2] Nicole Yamane, “Artificial Intelligence in the Legal Field and the Indispensable Human Element Legal Ethics Demands” (2020) 33 Georgetown Journal of Legal Ethics 877 at 878.

[3] Supra note 1 at 7.

[4] Law Society of Alberta, “Code of Conduct” (last modified 23 February 2023), online: <https://documents.lawsociety.ab.ca/wp-content/uploads/2017/01/14211909/Code.pdf> at 8-9.

[5] Supra note 2 at 882.

[6] Ibid.

[7] Ibid at 885.

[8] Supra note 4 at 7.

[9] Ibid at 8-9.

[10] Kathryn Armstrong, “ChatGPT: US lawyer admits using AI for case research”, BBC (27 May 2023), online: <https://www.bbc.com/news/world-us-canada-65735769>.

[11] Ibid.

[12] Ibid.

[13] Legal Profession Act, RSA 2000, c L-8, s. 106.

[14] Paige Parsons, “Alberta courts issue warning about the use of artificial intelligence in courtrooms”, CBC (12 October 2023), online: <https://www.cbc.ca/news/canada/edmonton/alberta-courts-warn-lawyers-about-ai-use-in-courtroom-1.6994204>.

[15] Ibid.

Panel: A Career in Technology and IP Law

The UCalgary Technology Law Association, in collaboration with the IP Law Club, hosted a panel featuring four fantastic lawyers who work in the areas of technology and IP law. Our panelists engaged in a fascinating and thorough discussion of their career paths, the implications of AI for the legal practice and the different ways that law students can pursue a career in tech and/or IP law. The UCalgary Technology Law Association would like to thank Osler for sponsoring this event. We would also like to thank the IP Law Club and our wonderful panelists for making this event possible.

Panelists

John Sanche is a Partner at Burnet, Duckworth & Palmer and a registered trademark agent. John’s wide-ranging practice spans IP, technology, start-ups and early-stage growth companies. He works extensively with start-ups and early-stage companies on licensing and technology agreements, as well as on patent and trademark applications to facilitate the purchase and sale of tech and IP. John has also worked extensively with carbon capture technology. His background in computer science and programming sparked his interest in technology law early in his career. John obtained his JD from the University of Saskatchewan and was drawn to the emerging tech scene in Calgary.

Chelsea Nimmo is a Senior Associate at Norton Rose Fulbright in the intellectual property group. Chelsea runs a broad litigation practice in which she advises clients across a variety of industries, including the pharmaceutical and life science sectors, on IP-related issues. Chelsea’s extensive work in patent litigation has enabled her to work with unique and innovative technologies, including pharmaceutical tech and medical devices. Chelsea obtained her JD from McGill and worked in Toronto prior to practising in Calgary. Beyond her background in chemistry, it was the litigation side of IP and patent law that drew her interest.

Chris Peng is an associate at Gowling WLG and a registered patent agent. Chris’ practice involves drafting and prosecuting patents relating to a wide range of technologies, including electrical, software and mechanical engineering. Chris works on patent applications from across the world, each of which involves different types of technology. Prior to becoming a patent agent and adopting an IP-focused practice, Chris practised as a corporate securities lawyer and sat on the Alberta Securities Commission. Chris’ interest in technology and background in engineering steered him towards patent law after his time in securities law.

Julian Dobre founded his own IP and technology law practice after working at a technology law firm in Calgary. Julian’s practice involves extensive work with social media influencers on IP-related issues. Julian is a recent graduate of the University of Calgary, Faculty of Law, where he developed his interest in tech and IP law. Interestingly, Julian expressed his aspiration to work in tech and IP law in his law school application. As a sole practitioner, Julian enjoys working and growing with his clients as they navigate legal issues arising from social media.

Implications of AI for the Legal Practice

Each of the panelists stressed the importance of an optimistic but cautious approach to the adoption of AI tools in the legal practice. AI legal tools had been used by lawyers and law firms for many years prior to the emergence of ChatGPT. The panelists noted that certain AI tools may be helpful in assisting lawyers with tasks such as legal research and discovery. However, the utility of AI tools varies substantially across practice areas. AI programs designed to process patent applications, for instance, are prone to mistakes and overlook the essential human element of this area of the law. Our panelists also touched on some of the fascinating problems posed by the use of AI, including ownership of AI-generated content, AI hallucination and data privacy. The panelists noted that AI has had a minimal effect on their respective practices but recognized its potential to play a greater role in the future. Each of our panelists stressed that lawyers should not rely too heavily on AI tools in their practice and should instead use these tools to supplement their knowledge and expertise.

Technology and IP Law Career Pathways

Lastly, our panelists gave the audience some useful and insightful tips on how to pursue a career in tech and/or IP law. Though most of the panelists had an undergraduate background related to tech and an interest in technology prior to law school, they each began their careers in general practice areas. As a result, they gained exposure to different areas of the law and developed valuable skills that enabled them to become well-rounded lawyers. As they progressed in their careers, they eventually adopted a more tech- and IP-focused practice. Our panelists stressed the importance for law students and junior associates of approaching their legal careers with an open mind and being willing to experience different areas of the law before pursuing a tech- and/or IP-focused practice.

2023/2024 Sponsor: Osler, Hoskin & Harcourt LLP

The UCalgary Technology Law Association would like to thank Osler for being our sponsor for the second year running. Osler is one of the leading law firms in the technology and emerging and high growth company sectors in Calgary and across North America. We are extremely grateful for Osler’s ongoing support and generosity.

Will AI trigger a start-up investment boom?  

Justin Chia, the editor of the UCalgary Technology Law Association Blog, discusses the start-up landscape for Artificial Intelligence and the importance of legal counsel in this emerging area.

The emergence of artificial intelligence (AI) across many sectors of society has been a source of both excitement and concern, depending on who you ask. From a business standpoint, AI was expected to jumpstart a wave of start-ups developing innovative and creative ideas centred on this exciting technology and, in turn, generating significant profits. So far, this prospect has not quite materialized to the extent anticipated. The multi-billion-dollar investments in AI are largely confined to a small bubble of start-ups in Silicon Valley.[1] However, prominent venture capitalists seem more than willing to invest in AI start-ups, meaning a boom in AI start-up investment in the near future is certainly not out of the question.[2] There are several potential barriers to such a boom, but this discussion will be limited to two key ones: 1) greedy tech giants and 2) high costs of entry.

One of the more misguided propositions in the start-up space is that competition triggers innovation, which in turn leads to profits. This is partly true, but the majority of profits generated by competition and technological innovation, such as AI, go directly to the multi-national, multi-billion-dollar companies that have the resources to exploit this technology for profit. Tech giants such as Google, Apple, Microsoft and Amazon have grown significantly richer with advancements in AI and the resulting hype amongst investors.[3] Efforts by growth companies to crack into the AI space and capture some of these profits largely end up enriching the tech giants, who use innovation as a means of creating barriers to entry for start-ups. This is of course not limited to AI but is instead a product of the corporate system. Established companies view new innovations as opportunities to generate profit, which leaves little room for start-ups to join the party.

A closely related explanation is the high cost for start-ups of accessing and developing AI technology, coupled with uncertainty amongst investors regarding the viability of AI start-ups. The high costs of developing and maintaining AI are significant barriers to growth for start-ups, especially when investment is not flowing in at a consistent rate. Add to this the myriad legal fees, including the costs associated with incorporation, patent applications, financing agreements and employment matters.

Lawyers play an important role in ensuring both the immediate and long-term success of AI start-ups. Critically, lawyers must help AI start-ups choose and implement a legal structure that reflects present needs and is capable of adapting to future circumstances.[4] Lawyers will also be key in ensuring that a start-up complies with the relevant legislation, as the last thing that a start-up needs is unnecessary litigation and the costs associated with it.

Another key area where a lawyer can be of use, especially in the context of AI, is ensuring that a start-up’s intellectual property is adequately protected. Lawyers help start-ups choose the type of IP protection that is most suitable for their invention or idea. In most cases involving technological innovations, a lawyer will assist start-ups in preparing and filing patent applications, which grant inventors exclusive rights to their inventions.[5]

In addition to protecting IP, lawyers also help AI start-ups raise capital by ensuring that the minute book and other relevant documents are prepared in compliance with the law. Lastly, and perhaps most importantly, lawyers advise AI start-ups on the appropriate exit transaction or strategy. A majority of tech and AI start-ups with sufficient funding tend to be acquired rather than going public through an IPO, but a lawyer will be in the best position to determine which option is most viable for the company given the circumstances.[6]

Suffice it to say, the AI start-up landscape remains uncertain, though promising in some respects. One thing that is certain, however, is that lawyers will play an integral role in helping AI start-ups navigate this landscape and, hopefully, thrive amidst the challenges.


[1] “Silicon Valley Startups lean into AI boom” (9 September 2023), online: Axios <https://www.axios.com/2023/09/09/startups-ai-venture-capital>.

[2] “The big risk behind the AI investment boom” (23 October 2023), online: Axios <https://www.axios.com/2023/10/23/venture-capital-ai-risk-investment>.

[3] “AI gave tech giants a $2.4 trillion boost to their market caps in 2023” (17 October 2023), online: CNBC <https://www.cnbc.com/2023/10/17/amid-ai-buzz-big-us-tech-giants-add-2point5-trillion-in-market-cap.html>.

[4] Bryce C Tingle, Start-up and Growth Companies in Canada: A Guide to Legal and Business Practice, 3rd ed (Toronto: LexisNexis, 2018) at 4.

[5] “Tips for Startups – Intellectual Property and its Value to Your Company” (August 2016), online: McMillan LLP <https://mcmillan.ca/insights/tips-for-startups-intellectual-property-and-its-value-to-your-company/>.

[6] Supra note 4 at 16.

The Legal Implications of Snapchats, BeReals and Other Kinds of Self-destructing Messages  

Jasleen Dhanoa, a 3L law student at the University of Calgary and writing contributor for the Tech and Law Association, discusses the legal implications of ephemeral messaging and whether its use can amount to spoliation.

In Canada, parties to a lawsuit are required to disclose documents that are relevant and within their control and possession.[1] A party can be found guilty of spoliation when they intentionally destroy a document they had a duty to preserve.[2] Doing so can lead to adverse sanctions against the violator. Spoliation was defined in the Rules of Court well before many of today’s forms of social media were created. The idea that evidence could be automatically destroyed without any positive action by any human connected to a lawsuit was simply not contemplated. So what happens when people use social media messaging intentionally designed to self-destruct? Can this amount to spoliation? Can it really be said that someone is guilty of spoliation if they themselves did not destroy the document in question? These questions have yet to be answered in Canadian courts.

As the Chief Growth Officer at Reveal Brainspace puts it, “Once squarely the domain of James Bond movies, the concept of self-destructing messages has gone from spy trope to mainstream.”[3] Also called ephemeral messaging, self-destructing messaging is common in social media platforms such as Snapchat, BeReal and Wickr; even Instagram has adopted a new “vanish mode.” These messages auto-delete after the recipient has viewed them or after a specified period. In assessing whether such messages can trigger a potential spoliation claim, it’s useful to look at what spoliation entails. The three elements required to find someone guilty of spoliation are that they had a duty to preserve the evidence, destroyed what they had the duty to preserve, and did so intentionally.

A good context in which to assess spoliation claims is Snapchat messages, so-called snaps. There is no question that messages from Snapchat can be requested by the courts; this was done recently in Araya v Nevsun Resources Ltd.[4] There is not much case law specifically about spoliation of snaps themselves. However, there has been litigation in the United States surrounding snaps that have been saved in-app in the “memories” folder on Snapchat. Doe v Purdue University is a case in which a user deleted Snapchat “memories.”[5] The deletion did not amount to spoliation, as there was no evidence that the deleted “memories” were relevant to the case or were deleted with the intent to affect the outcome of the litigation.[6] However, the court did note that there would be a duty to preserve if the “memories” were relevant to the litigation.[7] This commentary suggests that the duty to preserve can indeed be triggered by ephemeral messages. That duty, however, creates further issues. How should parties preserve such messages in the first place? After all, the duty to preserve is limited to only those documents that are in the control and possession of the litigant. Screenshots are an option, but they can be unreliable. Third-party apps used to save snaps are another option, but such apps are unofficial and potentially unreliable. Recovering snaps forensically may be cost-prohibitive and burdensome, so it is not an option that can be relied on. One option is to abide by Sedona Principle 4, which recommends that parties decide together on a discovery plan for electronically stored information.[8] But what if there was no plan and no snaps had been preserved? Does this amount to spoliation?

Destruction is intentional, for the purposes of spoliation, when a party destroys evidence with the aim of altering the outcome of the litigation.[9] So what does this mean for self-destructing messages? Keeping in mind that snaps are set up by Snapchat to be self-deleting and are not deleted by the user themselves, can this amount to intentional destruction? An argument could be made that they were deleted in the ordinary course of business and thus should not be subject to spoliation laws. However, this argument would likely fail, as the common law duty requires one to preserve information as soon as litigation is commenced.[10] Arguably, by continuing to send and receive snaps, one is intentionally choosing not to preserve documents. In fact, there is some support for the idea that litigants should stop using ephemeral messaging apps to discuss litigation-relevant topics as soon as litigation is anticipated.[11] One of the rationales supporting this is that an opposing litigant may seek an inference that the user of ephemeral messaging is trying to hide something based on their continued use of the service.[12] Once the duty to preserve is made out, and assuming the snaps are relevant to the litigation, the intentional destruction element can likely be easily established. If a user knows how Snapchat works and still uses it to discuss litigation matters, knowing that those snaps will be deleted and cannot be recovered for litigation, I argue that this amounts to intentional destruction. It should be immaterial that the users themselves are not deleting the messages. The “intentional” component should arguably be fulfilled through the choice that users make in using such services. Even absent bad faith, a judge may still find spoliation where the user employs a technology that thwarts litigation and discovery obligations by its very nature.[13]

Ultimately, the many issues that arise with ephemeral messaging have not been tried in Canadian courts. It remains to be seen how a court may deal with spoliation claims involving ephemeral messaging platforms. However, given the risk of adverse findings, and the several arguments in favour of a finding of spoliation, it is probably best to avoid ephemeral messaging altogether for litigation-centred conversations.


[1] Alberta Rules of Court, Alta Reg 124/2010, r 5.6.

[2] Gideon Christian, “A ‘Century’ Overdue – Revisiting the Doctrine of Spoliation in the Age of Electronic Documents” (2022) 59:4 Alberta L Rev [forthcoming] at 1.

[3] Cat Casey, “This Message Will Self-Destruct in Five Seconds: eDiscovery and Ephemeral Messaging” (16 June 2022), online: Reveal Brainspace <https://resource.revealdata.com/en/blog/ediscovery-and-ephemeral-messaging>.

[4] Araya v Nevsun Resources Ltd, 2019 BCSC 262. See also “Sedona Canada Commentary on Discovery of Social Media” (2022) 23 Sedona Conf J [forthcoming] at 94-95.

[5] Doe v Purdue University, 2021 WL 2767405, 2021 US Dist LEXIS 124257.

[6] Ibid at 7. 

[7] Ibid at 9. 

[8] The Sedona Conference, “The Sedona Canada Principles Addressing Electronic Discovery, Second Edition” (2016) 17:1 Sedona Conf J at 223.

[9] Supra note 2 at 11. 

[10] Supra note 2 at 7-8.

[11] Brian D Hall, “The Impact of Smart and Wearable Technology on Trade Secret Protection and E-Discovery” (2017) 33:1 ABA J of Labour & Employment L 79 at 85.

[12] Ibid at 86. 

[13] Supra note 3. 

Protecting Canada’s Cyber Security: Implications of An Act Respecting Cyber Security (ARCS)

Mahnoor Khalid, a 1L law student at the University of Calgary and writing contributor for the Tech and Law Association, addresses the implications of ARCS for protecting Canada’s cybersecurity.

On June 14, 2022, Bill C-26: An Act respecting cyber security, amending the Telecommunications Act and making consequential amendments to other Acts was introduced in the House of Commons, proposing cybersecurity requirements to protect Canada’s security and public safety. The objective of the bill is to enhance security in essential industries, more effectively minimise cyber risk across federally regulated industries, and provide the Canadian government with additional legal authority to react to threats.[1] The bill is split into two parts. The first part amends the Telecommunications Act to keep the security of the Canadian telecommunications system up to date. The second part enacts the Critical Cyber Systems Protection Act (CCSPA).[2]

Not only does the first part of the bill add the security of the Canadian telecommunications system as an objective of Canadian telecommunications policy, it also gives new powers to the Governor in Council and the Minister of Industry. Section 15 of the Telecommunications Act is amended drastically to provide these powers. Whereas most general powers granted under the bill will be exercised by the Minister, the Governor in Council is given broad powers under Part I to intervene directly in any matter essential to maintaining the security of the country. This may be especially beneficial in a situation that warrants an emergency response.[3]

Amendments made to the Telecommunications Act will impact and prohibit the use of certain products and services provided by specific telecommunications providers. Section 15.1(1) gives the Governor in Council broad powers to secure the Canadian telecommunications system “against the threat of interference, manipulation or disruption.”[4] Additionally, s 15.2(2) states that “the Minister may, by order, direct a telecommunications service provider to do anything or refrain from doing anything.”[5]

Part I also amends the Telecommunications Act to implement monetary penalties for contraventions of orders made under section 15.1 or 15.2. For each instance of non-compliance, telecommunications service providers may be liable for administrative monetary penalties (AMPs) of up to $10 million, and up to $15 million for any subsequent contravention.
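
For a concrete sense of the exposure, here is a minimal, hypothetical sketch of how AMP liability could accumulate at these statutory maxima. The function and its assumptions are illustrative, not drawn from the bill’s text:

```python
def max_amp_exposure(contraventions: int) -> int:
    """Illustrative only: maximum administrative monetary penalty (AMP)
    exposure for a telecommunications service provider, assuming (per the
    summary above) a cap of $10 million for a first contravention and
    $15 million for each subsequent one."""
    if contraventions <= 0:
        return 0
    # First contravention capped at $10M; each subsequent one at $15M.
    return 10_000_000 + 15_000_000 * (contraventions - 1)

# e.g. three separate contraventions: up to $10M + $15M + $15M = $40M
print(max_amp_exposure(3))  # 40000000
```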

Part II of the bill introduces the Critical Cyber Systems Protection Act (CCSPA). The purpose of the CCSPA is to protect “critical cyber systems”: systems that, if compromised, would drastically impact the continuity or security of a vital service. These critical cyber systems exist in the federally regulated private sector. Schedule 1 of the Act lists the vital services and systems, and each vital service is assigned a relevant regulator. If passed, the CCSPA would apply to a class of “designated operators”, listed in Schedule 2 of the Act, who carry on work subject to federal jurisdiction and to the regulator for their service.

Several obligations come into play if the CCSPA is passed. First, unless granted an extension by the regulator, an operator has 90 days to establish a cybersecurity program that meets the four purposes outlined in s 5 of the Act and to notify the regulator of this program. Each program must be reviewed annually, and the regulator must be informed of any changes. Second, the CCSPA gives the Governor in Council authority to direct operators in complying with the Act and meeting the purpose of protecting critical cyber systems. Operators must also take reasonable steps to identify and mitigate risks associated with their supply chains. The Act goes on to address cybersecurity incidents that can interfere with the continuity and security of critical cyber systems, in the event of which an operator must report the incident immediately.
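
As a rough illustration of this timeline, the sketch below computes the key compliance dates described above. It assumes, hypothetically, that the 90-day window runs from the date of designation and that program reviews recur annually; the Act’s final text may define these periods differently:

```python
from datetime import date, timedelta

# Hypothetical compliance calendar for a CCSPA designated operator.
designation_date = date(2024, 1, 15)  # assumed date the operator is designated

# 90 days to establish a cybersecurity program and notify the regulator.
program_deadline = designation_date + timedelta(days=90)

# Assumed annual review cycle running from the program deadline.
first_annual_review = program_deadline + timedelta(days=365)

print(f"Cybersecurity program due: {program_deadline}")    # 2024-04-14
print(f"First annual review due:  {first_annual_review}")  # 2025-04-14
```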

The following sets out the vital services and systems listed in Schedule 1 of the Act and the relevant government regulator for each.[6]

Telecommunications services: Minister of Industry
Transportation systems within the legislative authority of Parliament: Minister of Transport
Interprovincial or international pipeline and power line systems: Canada Energy Regulator
Nuclear energy systems: Canadian Nuclear Safety Commission
Banking systems: Office of the Superintendent of Financial Institutions
Clearing and settlement systems: Bank of Canada

While it is evident that changes to cybersecurity regulation are inevitable, Bill C-26 shows how the Canadian government is adapting to this evolving landscape. Cybersecurity is a rapidly changing field, and as Bill C-26 goes through subsequent readings and amendments, it will be interesting to see how the legislation develops to address the country’s cybersecurity risks. The second reading of Bill C-26 is currently in progress, and we will continue to watch how it is received and amended.


[1] “Cybersecurity and Bill C-26: How to Comply with Confidence” (29 August 2022), online: BDO Canada <https://www.bdo.ca/en-ca/insights/advisory/cybersecurity/cybersecurity-and-bill-c-26-how-to-comply-with-confidence/>.

[2] Bill C-26, An Act respecting cyber security, amending the Telecommunications Act and making consequential amendments to other Acts, 1st Sess, 44th Parl, 2022. 

[3] Imran Ahmad et al, “Bill C-26: The Increased Importance of Canadian Cybersecurity” (22 June 2022), online: Norton Rose Fulbright <https://www.nortonrosefulbright.com/en-ca/knowledge/publications/42944ded/bill-c26-the-increased-importance-of-canadian-cybersecurity>.

[4] Supra note 2.

[5] Ibid.

[6] Supra note 1.

Interview with Michael Grantmyre – Partner, Emerging and High Growth Companies Group (EHG), Osler

This interview has been edited for length and clarity 


UCalgary Tech Law Association’s Co-Founder Justin Lentz recently had the pleasure of interviewing Michael Grantmyre, a partner at Osler, Hoskin & Harcourt LLP whose practice focuses on mergers and acquisitions, corporate finance, and securities law in the Emerging and High Growth Companies sector.

Michael obtained his JD degree from the University of Windsor and started his career at LaBarge Weinstein LLP in Ottawa. After several years of successful practice at LaBarge, Michael joined Osler in Toronto, before finally moving to Osler’s Calgary office to bring his experience to support Osler’s building out of the Emerging and High Growth Companies Group in Alberta. 

Justin: Could you speak a bit about what the day-to-day of working with emerging and high growth companies is like?

Michael: A helpful place to start would be understanding the client that fits our practice model. From a company perspective, this would be a company that requires a lot of capital to grow very quickly, and that doesn’t necessarily mean only tech companies. It can mean many more things other than tech, and we’ve got a lot of clients that really fit our group’s thesis and that are clients of the firm, which are not all tech companies as you would expect, but they still fit that model. Obviously, tech is the clearest application of when those factors exist, so a lot of the clients of the firm are tech companies and a lot of the investors that we work with invest in tech companies.

The emerging and high growth companies group is really our way of describing [the] life cycle of a client; our support of them from an ideation or pre-incorporation stage where this is literally just an idea on a board … to the most late possible stage company, such as, for example in Calgary, Neo Financial, Attabotics, Symend, Athenian and, in Toronto, Wealthsimple. And these are companies that, not in all cases, but in some cases, we’ve started working with from their date of incorporation.

That’s really the thesis of our group and kind of where we’re looking to slot in. I think the biggest misconception of us as a firm is that we’re this big, Bay Street headquartered law firm that only wants to work with you if you are a BMO or CIBC. On the contrary, something that we’re proud of is that we’re able to figure out ways to work with companies well before they are generating revenue. So, the idea of emerging and high growth companies is this idea of plugging in from the emerging early stage through the high growth life cycle … we’re along for the ride for the whole thing, and that’s really what this EHG description is meant to capture.

Justin: Can you speak a bit about the differences in market structure between Toronto and Calgary?

Michael: What’s really drawn me to [Calgary] and why I jumped at the opportunity to help lead our firm’s practice in the prairies is the ecosystem of Alberta, Saskatchewan, and Manitoba; it’s exceptional, but it’s still more nascent than Toronto and Vancouver. I see tremendous value in that and, frankly, purpose, as a service provider, in being able to take all of the exposure to everything that I’ve gained [in Toronto and Ottawa] and bringing it here. I saw this ability to … contribute to awesome projects that people are working on in the prairies. There weren’t Neos five years ago, we saw Solium before it was acquired…there was this kind of flickering light of just what was possible in Alberta and, more broadly, the prairies. The Prairies has really come into its own as an ecosystem.

Justin: Are there any specific sectors within tech that you think will grow a lot in Calgary over the next 5 years or so?

Michael: Generative AI has been a pretty hot topic and we’ve seen some outsized rounds of financing and valuations for companies in that space, so that seems to be taking a bit of the stage right now in terms of what people are really focused on. 

But, I mean, from an ecosystem perspective, it’s interesting, right? Because you’ve got this local flavor of the oil and gas and industry which kind of takes on its own vibe; [we’re seeing] a lot of these energy transition companies and venture arms of oil and gas corporations investing in these companies too, and that’s really cool to see, but we have an extremely vibrant AI scene as well. Look at Edmonton, for example, AltaML is a leader from an artificial intelligence perspective, right in North America, and they’re spinning out some awesome, awesome ventures.

You’ve also got FinTech companies. Like I said, Neo is an easy, easy one to cite. You’ve got super cool digital health companies like Purpose Med and DrugBank. You’ve got awesome life science companies. I should also give the University of Calgary a shout out as well because they’ve been spinning out some awesome life sciences and biotech companies.

This province doesn’t just have oil and gas related tech, and that’s not what people are exclusively looking for either. So the market here, the investors here, the advisors here, are kind of all in on creating a truly diverse system of companies, which is really cool to see.

Justin: If a law student really wanted to get involved in that tech ecosystem you were talking about, where should they start?

Michael: I would say to reach out to lawyers at all different kinds of firms … Get involved in the community. I think that’s by far the best exercise you can undertake as a student trying to understand what a career in law in this particular space means, because all of those people out there that I mentioned, those are all your constituents that you need to serve and make happy.

[Just get a] sense of people’s aspirations. What are they excited about? What are they looking for? What are their pain points with lawyers? We’re not all great all the time, so what can you do better? Tech is really interesting in that you don’t just dabble in this; you don’t just kind of show up to the office and push through contracts, you’re getting exciting work. People want you to be part of the community. They want you to be part of the team when you’re working with clients. And, from my perspective, that’s the joy of this practice. They want you to be the friend and the therapist and the business advisor and also do some legal related work. I think that that’s the greatest compliment when you’re actually getting those opportunities to be that close to your clients. So, you know, it is almost kind of crazy to think, if people want that from you, why would you not get out into that community?

Get to know people. Build bonds. Just because they’re your clients doesn’t mean they can’t become your friends. Think about it from that perspective, because I think, when you’re on other people’s wavelengths…when you’re at that point, you’re at your best for them. So, I’d recommend that. The most important thing to focus on is just to get out there, get out in the community and see where you can help. There’s a lot of work to do out here.

Cybersecurity Implications of Autonomous Vehicle Regulation in Canada

Justin Chia, a 1L law student at the University of Calgary and writing contributor for the Tech and Law Association, takes a look at the implications that autonomous vehicle regulation poses for cybersecurity in Canada.

Autonomous vehicles have become one of the most fascinating and relevant technological innovations of the 21st century with prominent automotive manufacturers, such as Tesla and General Motors, making advancements in self-driving technology. The promising socioeconomic and environmental benefits of autonomous vehicles have forced governments around the globe to contemplate the potential legal implications of these technological trends. Canada’s federalist structure creates an interesting legal dynamic in this regard, as the federal government, the provinces and the territories each have legislative authority to enact regulatory schemes addressing autonomous vehicles.[1]

Multiple provinces, including Ontario and Quebec, have introduced pilot programs to regulate safety testing of self-driving vehicles. Others, namely Manitoba and Saskatchewan, have amended existing provincial legislation to regulate the issuance of permits for autonomous vehicle operation in the province.[2] The federal government, for its part, has introduced regulations under the Motor Vehicle Safety Act governing the import of autonomous vehicles into the country.[3] Underlying both the provincial and federal legislation relating to the testing of self-driving vehicles is a concern for data privacy, which constitutes a concurrent area of legislation.

Data collection is a critical component of safety testing and monitoring for autonomous vehicles, and with it come data privacy concerns. Autonomous vehicle manufacturers collect driver location and biometric data for the purposes of enhancing safety and performance. Lawmakers and advocacy groups have raised concerns that sensitive consumer data may be sold or distributed to third parties, thereby jeopardizing the privacy of consumers.[4] The collection and disclosure of personal information by AV manufacturers in Canada is subject to the Personal Information Protection and Electronic Documents Act (PIPEDA).[5] PIPEDA serves as an overarching framework, with provinces and territories having the option of complying with the federal regulatory scheme or creating their own data privacy frameworks at the local level.

Alberta, BC and Quebec have opted out of the federal regulatory framework by enacting substantially similar data privacy legislation at the provincial level that applies to AV data collection. Canada’s fragmented data privacy regulatory structure resembles its carbon pricing structure, which has prompted an increasing number of legal challenges by provinces against federal legislation and complicated efforts to establish a uniform set of regulations at the national level.

A fragmented data privacy regulatory structure has significant implications for efforts to address cybersecurity risks at the national level. The advancements in automation, artificial intelligence and other emerging technologies that make self-driving vehicles such an appealing innovation also expose driver data to increased cyber threats. A critical aspect of countering these threats is the development and implementation of national standards regulating the types of data collected and the modes of collection within the automotive industry. Enforcing such a framework will require coordination and collaboration between the federal government and the provinces and territories to ensure uniform compliance throughout the country.

The fragmented nature of the existing regulatory landscape constrains cross-government collaboration and reduces the likelihood of uniform compliance with industry data privacy standards. Parliament’s recent tabling of the Digital Charter Implementation Act (Bill C-27), which would repeal and replace PIPEDA, is an effort to establish a national regulatory framework in response to emerging cybersecurity concerns.[6] The new data privacy legislation would require automotive and artificial intelligence companies to comply with standards aimed at ensuring greater transparency in how they collect and dispose of driver data. The extent to which a national regulatory scheme is effective in maintaining data privacy and mitigating cybersecurity threats is largely contingent on the provinces’ and territories’ active participation in the scheme.


[1] Marin Leci et al, “Autonomous Vehicle Laws In Canada: Provincial & Territorial Regulatory Landscape” (29 December 2021), online (blog): BLG <https://www.blg.com/en/insights/2021/12/autonomous-vehicle-laws-in-canada-provincial-and-territorial-regulatory-landscape>.

[2] Ibid.

[3] Motor Vehicle Safety Act, SC 1993, c 16.

[4] “The Privacy Implications of Autonomous Vehicles” (17 July 2017), online (blog): Norton Rose Fulbright <https://www.dataprotectionreport.com/2017/07/the-privacy-implications-of-autonomous-vehicles/>.

[5] Personal Information Protection and Electronic Documents Act, SC 2000, c 5.

[6] Bill C-27, An Act to Enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, 1st Sess, 44th Parl, 2022.

Artificial Intelligence and Civil Liability

Cindy Chen, a 1L law student at the University of Calgary and writing contributor for the Tech and Law Association, takes a look at the impact of artificial intelligence on civil liability in Canada.


Artificial Intelligence (AI) is a transformative technology that provides numerous benefits for our society. By utilizing advanced data analysis algorithms, AI systems are now capable of semi-autonomous decision-making, creating practical benefits for numerous sectors ranging from art to health care and supply chain management. The rapid advancement of AI technology is largely funded by private investment; in 2021 alone, research and development investment amounted to around 93.5 billion dollars.[1] Yet while more and more private companies are investing in this booming industry, the law governing it remains unclear. According to a recent Stanford study, 131 bills to regulate the industry were proposed in the US in 2021, but only 2% ultimately became law.[2]

On June 16, 2022, Bill C-27, one of Canada’s first attempts to regulate parts of the AI industry, received first reading in the House of Commons.[3] As Bill C-27 is still before Parliament, there is a lack of clarity regarding remedies for damages sustained by private individuals using AI technology. With no legislation in place, injured individuals will likely have to rely on fault-based civil liability law for a remedy. However, as in medical and toxic substance negligence cases, liability is difficult to establish: the unique characteristics of AI systems create an unfair evidentiary burden for injured plaintiffs. The difficulty is acute when an AI system acts autonomously on multiple inputs, as it becomes challenging for an injured plaintiff to pinpoint which developer’s fault produced the AI output that led to the damage.[4]

On September 28, 2022, the European Commission released a proposed directive outlining a new liability framework for AI-related civil liability claims in order to resolve this issue. Although it is not binding law, it nevertheless provides a possible way to circumvent the evidentiary problem in AI-related claims.

Rebuttable Presumption of Causality

One of the most significant deviations from standard fault-based negligence analysis is the rebuttable presumption of causality. Unlike under the ordinary “but for” test, the EU Commission is proposing a presumption of a causal link between the fault of the defendant and the output or failure produced by the AI system if the plaintiff can show the following (illustrated in the sketch after this list):

  1. the defendant’s behavior deviated from the standard of care laid out by relevant legislation;
  2. it is reasonably likely that the fault has influenced the output or failure of the AI system; and
  3. the claimant has shown that the output or failure of the AI system led to the damage.[5]

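To make the burden-shifting structure of this test concrete, the following is a minimal, hypothetical sketch in Python. The function name and parameters are illustrative inventions, not the directive’s operative text:

```python
def causal_link_presumed(
    breached_duty_of_care: bool,           # prong 1: conduct deviated from a legislated standard of care
    fault_likely_influenced_output: bool,  # prong 2: reasonably likely the fault influenced the AI output or failure
    output_led_to_damage: bool,            # prong 3: claimant has shown the output or failure led to the damage
    rebutted_by_defendant: bool = False,
) -> bool:
    """Illustrative sketch of the rebuttable presumption of causality in the
    EU Commission's proposed AI liability directive (art 4). Hypothetical
    code, not the directive's operative text and not legal advice."""
    presumption = (
        breached_duty_of_care
        and fault_likely_influenced_output
        and output_led_to_damage
    )
    # The presumption is rebuttable: the defendant, who typically has better
    # technical knowledge of the AI system, may displace it with contrary evidence.
    return presumption and not rebutted_by_defendant

# All three prongs made out, no rebuttal: the causal link is presumed.
print(causal_link_presumed(True, True, True))  # True

# The defendant rebuts the presumption: the plaintiff must prove causation directly.
print(causal_link_presumed(True, True, True, rebutted_by_defendant=True))  # False
```
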
This presumption can effectively lower the evidentiary burden for the injured plaintiff, as the plaintiff no longer has to prove direct causation between system input and ultimate injury. The onus for rebutting this presumption is placed on the defendant, who likely has better technical knowledge than the plaintiff and will be in a better position to rebut this presumption. It is important to remember that this suggestion has not yet been adopted by any court. As the AI industry evolves, hopefully, there will be more legislation and common law practices to ultimately create more certainty for manufacturers and consumers. 


[1] Stanford University Human-Centered Artificial Intelligence, The AI Index 2022 Annual Report (2022), online (pdf): <https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf>.

[2] Ibid at 175.

[3] Parliament of Canada, Bill C-27 (2022), online (pdf): <https://www.parl.ca/DocumentViewer/en/44-1/bill/C-27/first-reading>.

[4] European Commission, Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (2022), online (pdf): <https://ec.europa.eu/info/sites/default/files/1_1_197605_prop_dir_ai_en.pdf>.

[5] Ibid at art 4.

Club Sponsorship


UCalgary Tech and Law Association is excited to announce our platinum sponsor, Osler, Hoskin & Harcourt LLP.

Osler, Hoskin & Harcourt LLP is a leading law firm with a singular focus – your business. From Toronto, Montréal, Calgary, Ottawa, Vancouver and New York, we advise our Canadian, U.S. and international clients on an array of domestic and cross-border legal issues. Our “one firm” approach draws on the expertise of more than 500 lawyers to provide responsive, proactive and practical legal solutions driven by your business needs. For over 160 years, we’ve built a reputation for solving problems, removing obstacles, and providing the answers you need, when you need them. It’s law that works.

Osler’s Emerging and High Growth Companies Group (EHG) and Venture Capital teams help clients recognize, develop and realize innovative venture capital opportunities. The firm acts for more than 1,500 early, growth and late-stage ventures and venture investors across Canada, the United States and around the world. In 2022, the firm released its inaugural Deal Points Report: Venture Capital Financings, which includes data from more than 300 venture capital and growth equity preferred share financings completed by Osler from 2019 to 2021, representing more than US$5.7 billion in total transaction value. The EHG team works in collaboration with the firm’s other leading practice groups including Technology, Intellectual Property, Digital Assets and Blockchain, Fintech, and Privacy and Data Management. We’re also a leading firm for M&A transactions and IPOs.