AI tools are meant to enhance human capabilities, increase the quality of our outputs and improve the value we deliver to those interacting with us.
"Ethical challenges are not vastly different from what they've been: Did you have the permission to gather the data? Were you transparent about collection and use? Did you allow people a choice in taking part? Did you consider biases and gaps? Etc."

Jeff Jarvis - professor at City University of New York
To the extent of our capabilities, we will ensure we are aligned with the EU AI Act until the applicable rules and laws are defined. Once they are, compliance will become an obligation, so, needless to say, we will apply those rules.
We prohibit the use of AI applications that manipulate, discriminate against, or harm individuals.
We use tools like ChatGPT, Copilot, Gemini, etc., for inspiration, text suggestions, stylistic editing, translation, summarisation, technical term searches, tone editing, shortening, subtitle suggestions and other content work. Large Language Models and specialised LLMs can also solve mathematical and logical problems, create and edit tables, categorise data, find Excel functions, and more.
We do not rely on these tools to discover or verify factual information.
We double-check all text and information AI generates. Our default mindset when evaluating the output is scepticism, so that our critical thinking gets the space and attention it needs.
When using publicly available LLMs, we protect personal data and sensitive internal information. We never enter the personal data of our clients or partners, or any other sensitive internal information.
We do not generate or use AI-generated text and images mechanically or without reason.
We don't want to increase production at the cost of quality, nor do we want to replace originality with mediocrity. We will use AI tools to help us add quality to our texts, images and videos.
Individuals remain responsible for the texts, images and videos created with the help of AI and sign them in their own name.
Using text prompts and instructions, these tools can create detailed and thought-provoking images, artwork, and infographics that can be used for a variety of applications, including advertising and commercial use.
When prompted, AI tools may replicate protected elements, symbols, styles, or typography without proper attribution to their rightful owner(s). Even if a system doesn't generate new pictures that directly copy the original artwork, photographs, or branding, it may produce similar or closely inspired derivatives, which can still be a legal concern.
We do not generate or use AI-generated images mechanically or without reason.
We don't want to increase production at the cost of quality, nor do we want to replace originality with mediocrity. AI tools help us add quality to our images and videos, so those should always end up better than if we had created them ourselves.
Individuals remain responsible for the images and videos created with the help of AI and sign them in their own name.
We never generate images, videos or audio that could give a false impression of actual events.
When creating advertising material, we never prompt with the specific names of photographers or artists in order to replicate their exact visual style. If we want to replicate an exact style, we first seek approval from those photographers and artists.
We never generate images, videos or audio that could be confused with the medium of photography, video, or actual sound recording. An acceptable exception is when we want to illustrate the capabilities of AI tools for educational purposes, in which case we transparently include information about their use.