Technology Affects Everyone
Technology is not neutral. Every app, algorithm, and device has an impact on people, society, and the planet. Social media can connect friends across continents, but it can also spread misinformation and harm mental health. Artificial intelligence can diagnose diseases, but it can also discriminate against people based on biased training data.
As a computer scientist, you have a responsibility to think about the consequences of the technology you create and use. This topic explores the ethical, legal, cultural, environmental, and privacy issues surrounding modern technology — and it is a significant part of your GCSE exam.
Understanding these issues does not just help you pass an exam. It helps you become a responsible digital citizen who can make informed decisions about the technology that shapes your life.
This topic covers:
- Privacy, surveillance, and the digital divide
- UK laws: Data Protection Act, Computer Misuse Act, Copyright law, Freedom of Information Act, and RIPA
- Ethical issues: AI bias, autonomous vehicles, facial recognition, deepfakes, net neutrality, right to repair, digital addiction, and more
- Environmental impact: energy use, e-waste, rare earth mining, AI carbon footprint, smartphone lifecycle, and net zero targets
- Stakeholder analysis — identifying who is affected by technology decisions and considering multiple perspectives
- Open source vs proprietary software
- Key vocabulary for your exam
- Exam tips and strategies for discussion questions
Privacy, Surveillance, and Access
Privacy and Data Collection
Every time you use the internet, you leave a digital footprint. Companies collect vast amounts of data about you, often without you realising how much they know:
- Search engines track every search you make and use it to build a profile of your interests
- Social media platforms record what you post, like, share, and how long you look at different content
- Online shops track what you browse, buy, put in your basket, and even what you look at but do not buy
- Smartphones can track your location, monitor your app usage, and access your contacts and photos
- Smart home devices (voice assistants, smart TVs) may listen to conversations and collect usage data
This data is incredibly valuable. Companies use it to target advertisements, personalise recommendations, and even influence your behaviour. It can also be sold to third parties, leaked in data breaches, or accessed by governments.
Many people are unaware of the full extent of data collection. When you agree to a website’s terms and conditions or accept cookies, you may be consenting to far more data collection than you realise. Studies have shown that most people do not read privacy policies — and even those who do often struggle to understand what they are agreeing to, because the language used is deliberately complex.
Surveillance and Monitoring
Technology enables surveillance on a scale never before possible:
- CCTV cameras are widespread in UK cities — the UK has one of the highest numbers of surveillance cameras per person in the world
- Workplace monitoring — Employers can monitor emails, browsing, and even keystrokes on company devices
- Government surveillance — Intelligence agencies can collect communications data for national security purposes (the UK’s Investigatory Powers Act 2016, sometimes called the “Snooper’s Charter”)
- Social media monitoring — What you post can be viewed by schools, employers, and law enforcement
The debate centres on security vs privacy: surveillance can help prevent crime and terrorism, but it also means innocent people are constantly being watched. Where should the line be drawn?
This is one of the most important ethical debates in technology. Those who favour surveillance argue that "if you have nothing to hide, you have nothing to fear." However, privacy advocates point out that surveillance can have a chilling effect on free speech — people behave differently when they know they are being watched, even if they are doing nothing wrong. History also shows that surveillance powers, once granted, are rarely reduced and can be misused by future governments.
The Digital Divide
The digital divide is the gap between people who have access to technology and the internet and those who do not. This divide can be based on:
- Income — Not everyone can afford a computer, smartphone, or broadband
- Location — Rural areas often have slower or no broadband. Many developing countries have limited internet infrastructure
- Age — Older people may lack the skills or confidence to use technology
- Disability — Websites and software that are not designed to be accessible exclude people with visual, hearing, or motor impairments
As more services move online (banking, healthcare, government services, job applications), the digital divide becomes a serious equality issue. People without access risk being left behind.
The COVID-19 pandemic highlighted the digital divide sharply. When schools moved to online learning, students without reliable internet access or their own devices fell behind. Similarly, people who could not access online grocery delivery or digital health services were at a disadvantage. This demonstrated that the digital divide is not just an inconvenience — it can have real consequences for education, health, and quality of life.
Key Topics in Detail
The Law
Several UK laws are relevant to computer science. You need to know the key points of each for your exam. Being able to name the correct law and explain its key provisions is essential for high marks. Pay close attention to the differences between each law — students often confuse them.
Data Protection Act 2018 / UK GDPR
This law controls how personal data (any information that can identify a living person) is collected, stored, and used. It replaced the Data Protection Act 1998 and incorporates the EU’s General Data Protection Regulation (GDPR) into UK law.
Key principles — Personal data must be:
- Processed lawfully, fairly, and transparently
- Collected for a specific, stated purpose and not used for anything else
- Adequate, relevant, and not excessive — only collect what you actually need
- Accurate and kept up to date
- Not kept longer than necessary
- Kept secure (protected against unauthorised access, loss, or damage)
Your rights under the DPA 2018 / UK GDPR include:
- The right to access any personal data an organisation holds about you
- The right to have inaccurate data corrected
- The right to have your data deleted (“the right to be forgotten”)
- The right to object to your data being used for marketing
- The right to know what data is being collected and why
Real-world cases:
- Facebook / Cambridge Analytica (2018) — The political consulting firm Cambridge Analytica harvested personal data from millions of Facebook users without their consent. The data was used to build psychological profiles and target political advertisements during the 2016 US presidential election, and similar techniques were alleged to have been used around the EU referendum. Facebook was fined £500,000 by the ICO (the maximum under the old Data Protection Act 1998). This scandal was a major catalyst for stronger data protection enforcement worldwide.
- British Airways data breach (2018) — Hackers stole the personal and financial details of approximately 400,000 customers from the British Airways website and app. The ICO initially proposed a record £183 million fine under the new GDPR rules, which was later reduced to £20 million. The case demonstrated that organisations have a legal duty to keep customer data secure, and that failing to do so carries serious consequences.
Computer Misuse Act 1990
This law makes it illegal to access or modify computer systems without permission. It was introduced in response to growing cyber crime and has three main offences:
- Unauthorised access to computer material — Accessing a computer system without permission (e.g. hacking into someone’s email). Up to 2 years in prison.
- Unauthorised access with intent to commit a further offence — Hacking with the aim of committing another crime (e.g. accessing a bank system to steal money). Up to 5 years in prison.
- Unauthorised modification of computer material — Changing data without permission (e.g. deleting files, planting a virus, encrypting data with ransomware). Up to 10 years in prison.
A later amendment added: Making, supplying, or obtaining tools for use in computer misuse offences (e.g. creating and distributing hacking tools or malware).
Real-world cases:
- Gary McKinnon (2001–2002) — A British systems administrator who hacked into 97 US military and NASA computers from his bedroom in London. He claimed he was looking for evidence of UFOs, but US authorities said he caused over £500,000 worth of damage by deleting critical files and shutting down military networks. The US sought his extradition, but in 2012 the Home Secretary blocked it on human rights grounds due to his Asperger syndrome diagnosis. The case highlighted the international reach of the CMA and the difficulty of prosecuting cross-border cyber crime.
- TalkTalk data breach (2015) — The telecoms company TalkTalk suffered a major cyber attack in which hackers accessed the personal data of nearly 157,000 customers, including bank account details. The attack was carried out partly by teenagers. TalkTalk was fined a then-record £400,000 by the ICO for failing to implement basic security measures. The company also lost over 100,000 customers and an estimated £60 million in costs. This case showed that both the attackers (prosecuted under the CMA) and the company (prosecuted under data protection law) could face legal consequences.
Copyright, Designs and Patents Act 1988
This law protects the creators of original work, including:
- Software and computer programs
- Music, films, and photographs
- Written content (books, articles, website text)
- Art and designs
It is illegal to copy, distribute, or modify someone’s copyrighted work without their permission. This includes pirating software, downloading films illegally, or copying code from a website without a licence. Penalties can include fines and imprisonment.
An important distinction exists between copyright and licensing. A copyright holder can choose to license their work under different terms. For example, a Creative Commons licence allows others to use and share work under certain conditions, while an open source software licence allows anyone to view and modify the source code. Understanding licensing is important for any computer scientist, as using code or content without an appropriate licence is a breach of copyright law.
Real-world cases:
- Napster (1999–2001) — Napster was one of the first widely-used peer-to-peer file-sharing services, allowing users to share MP3 music files for free. The Recording Industry Association of America (RIAA) sued Napster for facilitating mass copyright infringement. The courts ruled against Napster, and it was forced to shut down in 2001. The case established that platforms which enable copyright infringement can be held legally responsible, and it reshaped the entire music industry, eventually leading to legal streaming services like Spotify.
- The Pirate Bay (2009) — The founders of The Pirate Bay, a Swedish torrent website, were found guilty of assisting in copyright infringement. They were sentenced to one year in prison each and ordered to pay damages of approximately £2.7 million. Despite the convictions, the website continued to operate through mirror sites, illustrating how difficult it is to enforce copyright law on the global internet.
Freedom of Information Act 2000
This law gives the public the right to request information held by public authorities (such as government departments, the NHS, councils, police, and schools). The organisation must respond within 20 working days. Some information can be withheld if it relates to national security, personal data, or commercial interests.
The FOIA is an important tool for transparency and accountability. Journalists and members of the public regularly use it to uncover information about government spending, policy decisions, and public services. For example, FOI requests have revealed data about hospital waiting times, school inspection results, and local council expenditure. The Act does not apply to private companies — it only covers public bodies.
Regulation of Investigatory Powers Act 2000 (RIPA)
RIPA regulates the powers of public bodies to carry out surveillance and investigation. It covers the interception of communications (phone calls, emails, internet activity), the use of covert surveillance and informants, and the acquisition of communications data. RIPA was designed to ensure that surveillance is carried out lawfully and proportionately, but it has been criticised for giving authorities overly broad powers. It has since been supplemented by the Investigatory Powers Act 2016, which requires internet service providers to store records of websites visited by every citizen for 12 months.
Ethical Issues in Computing
Ethics in computing refers to the moral principles that guide decisions about how technology is designed, developed, and used. Unlike laws, which are enforced by authorities, ethics are about what should be done, not just what must be done. A technology can be perfectly legal but still raise serious ethical concerns. The following sections explore some of the most important ethical debates in modern computing.
AI Bias
Artificial intelligence systems learn from data. If that data contains biases (conscious or unconscious), the AI will reproduce and amplify those biases. For example:
- A recruitment AI trained on historical hiring data might discriminate against women if the company previously hired mostly men
- Facial recognition systems have been shown to be less accurate for people with darker skin tones because they were trained primarily on lighter-skinned faces
- Predictive policing algorithms may unfairly target certain communities based on biased historical crime data
The ethical question is: who is responsible when an AI makes a biased or unfair decision? The programmer? The company? The data provider? There is currently no clear legal framework for assigning responsibility in these cases, which is why governments around the world are developing AI regulation. The EU has proposed an AI Act that would classify AI systems by risk level and impose stricter rules on high-risk applications such as recruitment, healthcare, and law enforcement.
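To make the mechanism concrete, here is a minimal, deliberately simplified sketch in Python. It is an illustration only, not a real recruitment system: the historical data is invented and happens to favour men, and the toy "model" reproduces that bias when asked about new applicants.

```python
# Minimal, illustrative sketch (not a real recruitment system).
# The toy "model" simply learns the historical interview rate for each group.
# Because the made-up historical data is biased, the model reproduces that bias.

historical_hires = [
    # (applicant_gender, was_interviewed) - invented records
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", False), ("female", False), ("female", True), ("female", False),
]

def train(data):
    """Learn the past interview rate for each group."""
    rates = {}
    for group in {gender for gender, _ in data}:
        outcomes = [interviewed for gender, interviewed in data if gender == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def recommend_interview(rates, gender, threshold=0.5):
    """Recommend an interview only if the learned rate for that group clears the threshold."""
    return rates[gender] >= threshold

model = train(historical_hires)
print(model)                                 # learned rates: male 0.75, female 0.25
print(recommend_interview(model, "male"))    # True  - recommended
print(recommend_interview(model, "female"))  # False - rejected purely because of past bias
```

Notice that the code contains no deliberately discriminatory rule — the unfairness comes entirely from the data it learned from, which is exactly how bias enters real AI systems.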
Autonomous Vehicles
Self-driving cars raise difficult moral questions. If an accident is unavoidable, how should the car decide what to do? Should it prioritise the safety of its passengers, or pedestrians? Who is legally responsible if an autonomous vehicle causes an accident — the passenger, the manufacturer, or the programmer who wrote the code?
These questions are not just theoretical. Several fatal accidents involving autonomous and semi-autonomous vehicles have already occurred. Current UK law requires a human driver to be in control at all times, but as the technology develops, the law will need to evolve. The UK government has been developing a framework for self-driving vehicles that considers questions of liability, insurance, and data protection.
Facial Recognition
Facial recognition technology can identify individuals from camera footage. It has legitimate uses (unlocking phones, finding missing persons, verifying identity at airports) but raises serious concerns:
- Mass surveillance without consent — Cameras equipped with facial recognition can track individuals as they move through public spaces without their knowledge or agreement
- Accuracy issues — Studies have shown the technology is significantly less accurate for people with darker skin tones and for women, raising serious discrimination concerns
- Potential for authoritarian misuse — In some countries, facial recognition is used to monitor and control citizens, suppress protests, and target minority groups
- The erosion of anonymity — The ability to move through public spaces without being identified is considered by many to be a fundamental right. Widespread facial recognition effectively removes this right
In 2020, the Court of Appeal ruled that South Wales Police’s use of automated facial recognition technology was unlawful because it did not adequately consider the impact on privacy and equality. This landmark ruling showed that even law enforcement must carefully balance the benefits of facial recognition against its risks to civil liberties.
Deepfake Technology
Deepfakes are AI-generated images, audio, or video that realistically imitate real people. The technology uses machine learning to swap faces, clone voices, or create entirely fabricated footage that is extremely difficult to distinguish from reality. Deepfakes raise several serious concerns:
- Misinformation — Deepfake videos of politicians or public figures saying things they never said could be used to manipulate elections and public opinion. During elections, a convincing fake video released at the right moment could influence millions of voters before fact-checkers have time to respond
- Identity theft and fraud — Criminals could use deepfake voice cloning to impersonate someone in phone calls to banks, employers, or family members. There have already been cases of fraudsters using AI-cloned voices to trick employees into transferring large sums of money
- Harassment — Deepfake technology has been used to create non-consensual intimate images of individuals, causing severe emotional harm. The UK’s Online Safety Act has made the sharing of such images a criminal offence
- Erosion of trust — If any video or audio recording can be faked, it becomes harder to trust genuine evidence, undermining journalism, the courts, and democratic debate. This is sometimes called the “liar’s dividend” — even real footage can be dismissed as a deepfake
Detecting deepfakes is an active area of research, but the technology is advancing faster than detection methods. Some experts argue that digital watermarking and content provenance standards (which track how and where digital content was created) may be more effective than trying to detect fakes after the fact.
Algorithmic Decision-Making
Algorithms increasingly make decisions that affect people’s lives — from determining who gets a loan or a job interview, to calculating insurance premiums, to deciding what content you see on social media. This raises important ethical questions:
- Accountability — If an algorithm denies someone a mortgage or a university place, who is responsible? The company? The developer? The algorithm itself?
- Transparency — Many algorithms are “black boxes” whose inner workings are not understood even by their creators. Should people have the right to an explanation of how an automated decision about them was made?
- Fairness — Algorithms trained on historical data can perpetuate existing inequalities. For example, if a postcode-based algorithm is used to set insurance prices, people in poorer areas may be charged more regardless of their individual circumstances.
Under the UK GDPR, individuals have the right not to be subject to decisions based solely on automated processing if those decisions significantly affect them. They also have the right to request human review of automated decisions.
A notable example occurred in 2020 when an algorithm was used to calculate A-level grades in England after exams were cancelled. The algorithm systematically downgraded students at state schools in disadvantaged areas while upgrading students at private schools. After widespread protests, the algorithm-generated grades were replaced with teacher-assessed grades. This case demonstrated the real-world harm that biased algorithms can cause and highlighted the importance of transparency and accountability in automated decision-making.
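The postcode example above can be sketched in a few lines. Every figure and postcode below is invented for illustration: the point is that a pricing rule which never mentions income can still charge people by income when it relies on a correlated proxy such as postcode.

```python
# Illustrative sketch only: a "neutral" pricing rule that uses postcode as an input.
# Because postcode correlates with income, people in poorer areas are quietly charged
# more, even though income is never looked at directly. All figures are invented.

BASE_PREMIUM = 300  # pounds per year

CLAIM_RATE_BY_AREA = {
    "AB1": 0.05,  # wealthier area in this made-up example
    "CD2": 0.15,  # poorer area in this made-up example
}

def quote(postcode_area):
    """Price a policy using only the postcode area - no individual circumstances."""
    rate = CLAIM_RATE_BY_AREA[postcode_area]
    return round(BASE_PREMIUM * (1 + rate * 4), 2)

print(quote("AB1"))  # 360.0
print(quote("CD2"))  # 480.0 - identical driver, higher price from postcode alone
```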
Net Neutrality
Net neutrality is the principle that all internet traffic should be treated equally by internet service providers (ISPs). Under net neutrality, ISPs cannot charge more for access to certain websites, slow down competitors’ services, or give priority to content from companies that pay extra.
- Arguments for net neutrality: It ensures a level playing field, so small businesses and start-ups can compete with large corporations. It protects free speech and innovation. Without it, ISPs could decide which websites load quickly and which do not, effectively controlling what people can access online. Imagine if your ISP slowed down a small independent news website because it could not afford to pay for “fast lane” access.
- Arguments against net neutrality: ISPs argue they should be able to manage their networks efficiently. Some say that charging more for high-bandwidth services (like video streaming) is fair because those services use more resources, and that the extra revenue could fund investment in network infrastructure. ISPs also argue that some traffic management is necessary — for example, prioritising video calls for medical consultations over casual video streaming.
In the US, net neutrality rules were repealed in 2017, sparking intense debate. In the EU and UK, net neutrality protections remain in place, though they are subject to ongoing review. The debate is a good example of how technology policy involves balancing the interests of multiple stakeholders: consumers, businesses, ISPs, content creators, and regulators.
Right to Repair
The right to repair movement argues that consumers should be able to repair their own electronic devices, or take them to independent repair shops, rather than being forced to use the manufacturer’s expensive repair services. Many manufacturers make their devices deliberately difficult to repair by using proprietary screws, gluing components together, or refusing to supply spare parts.
- Arguments for: Reduces e-waste, saves consumers money, extends the useful life of devices, supports local repair businesses, and reduces dependence on manufacturers
- Arguments against: Manufacturers argue that opening devices could compromise safety and security, void warranties, and that complex modern devices require specialist knowledge to repair safely
The EU and several US states have introduced right-to-repair legislation, and there is growing pressure for the UK to follow.
Digital Addiction
Many technology platforms are deliberately designed to be as addictive as possible. Features like infinite scrolling, autoplay, push notifications, streaks, and “like” counts exploit psychological mechanisms to keep users engaged for as long as possible. This raises the question: are tech companies responsible for the addictive nature of their products?
- Research has linked excessive screen time to anxiety, depression, sleep disruption, and reduced attention spans, particularly in young people
- Some former tech executives have publicly stated that their products were designed to be addictive. One former Facebook executive described the platform’s feedback loops as “exploiting a vulnerability in human psychology”
- Governments are beginning to act — for example, the UK’s Online Safety Act places duties on platforms to protect children from harmful content and addictive design features
- Others argue that individuals bear personal responsibility for managing their own technology use, and that regulation risks limiting innovation and free expression
- Some technology companies have introduced “digital wellbeing” features (such as screen time reports and app timers), but critics argue these are largely cosmetic and do not address the underlying addictive design patterns
This is an area where the interests of different stakeholders clearly conflict: technology companies profit from maximising user engagement, while users (particularly young people) may suffer harm from excessive use. The question of where responsibility lies — with the individual, the company, or the regulator — is one of the most important ethical debates in modern computing.
Social Media Impact
Social media has transformed communication, but it also raises concerns:
- Mental health — Studies link excessive social media use to anxiety, depression, and poor body image, particularly among young people. Research by the Royal Society for Public Health found that platforms like Instagram and Snapchat had the most negative impact on young people’s mental health
- Misinformation — False information can spread rapidly, influencing elections and public health decisions. During the COVID-19 pandemic, the spread of health misinformation on social media was so significant that the World Health Organisation described it as an “infodemic”
- Cyberbullying — Online platforms can be used to harass and intimidate. Unlike traditional bullying, cyberbullying can happen 24 hours a day, reach a much wider audience, and be very difficult to escape
- Echo chambers — Algorithms show you content you already agree with, reinforcing existing views and making it harder to see other perspectives. This can polarise public opinion and make constructive debate more difficult
- Addiction — Platforms are designed to be as engaging as possible, using techniques like infinite scrolling and notifications to keep you coming back
Positive impacts of social media:
- Global connection — People can maintain relationships across distances and connect with communities that share their interests
- Civic engagement — Social media has been instrumental in organising social movements, raising awareness of injustice, and enabling political participation
- Creative expression — Platforms give individuals a space to share art, music, writing, and ideas with a global audience without needing a publisher or studio
- Access to information — Breaking news, educational content, and expert knowledge are more accessible than ever before
- Support networks — People dealing with health conditions, disabilities, or difficult circumstances can find communities of support online
Environmental Impact
Energy Consumption
- Data centres (which power cloud computing, streaming, social media, and AI) consume enormous amounts of electricity and require extensive cooling systems
- The IT industry accounts for an estimated 2–4% of global carbon emissions — similar to the aviation industry
- Streaming video consumes energy in data centres, across network infrastructure, and on the viewing device — and video streaming accounts for the majority of global internet traffic
- Cryptocurrency mining (e.g. Bitcoin) consumes vast amounts of electricity
Carbon Footprint of AI
Training large artificial intelligence models requires enormous computational power running for weeks or months. Researchers have estimated that training a single large AI model can emit as much carbon dioxide as five cars produce over their entire lifetimes. As AI becomes more widespread, its energy demands are growing rapidly. This has led to calls for “green AI” — developing more energy-efficient algorithms and training models using renewable energy sources.
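The scale involved can be estimated with simple arithmetic: energy used = number of processors × power per processor × hours of training, and emissions = energy used × the carbon intensity of the electricity grid. The sketch below uses purely illustrative figures (512 GPUs drawing 0.4 kW each for 30 days, on a grid emitting 0.2 kg of CO2 per kWh) rather than data about any real model.

```python
# Back-of-the-envelope estimate of the CO2 from one training run.
# Every figure is an assumption chosen for illustration, not a measurement
# of any real model.

num_gpus = 512
power_per_gpu_kw = 0.4     # assumed average power draw per GPU, in kilowatts
training_days = 30
grid_kg_co2_per_kwh = 0.2  # assumed carbon intensity of the electricity supply

hours = training_days * 24
energy_kwh = num_gpus * power_per_gpu_kw * hours
emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

print(f"Energy used: {energy_kwh:,.0f} kWh")          # 147,456 kWh
print(f"CO2 emitted: {emissions_tonnes:.1f} tonnes")   # about 29.5 tonnes
```

Lowering the carbon intensity figure — for example by training in a data centre powered by renewables — is exactly why the "green AI" movement cares about where models are trained as well as how efficiently.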
Rare Earth Minerals
Modern electronic devices depend on scarce raw materials such as lithium, cobalt, and tantalum, as well as rare earth elements such as neodymium. These are used in batteries, screens, circuit boards, and speakers. Mining these minerals causes significant environmental and human harm:
- Environmental damage — Mining operations can destroy habitats, contaminate water supplies, and release toxic chemicals into the soil
- Human cost — In some countries, particularly the Democratic Republic of Congo, cobalt is mined using child labour in dangerous conditions
- Limited supply — Many of these minerals are finite resources. As global demand for electronics grows, supply pressures will increase
- Geopolitical concerns — A small number of countries control most of the world’s supply, creating dependency and potential for conflict
Lifecycle of a Smartphone
Understanding the full lifecycle of a device helps illustrate the true environmental cost of technology:
- Mining — Raw materials (lithium, cobalt, gold, copper, rare earth elements) are extracted from the ground, often in developing countries, causing habitat destruction and pollution
- Manufacture — Components are assembled in factories, typically in East Asia, using large amounts of energy and water. The manufacturing process alone accounts for around 70–80% of a smartphone’s total carbon footprint
- Transport — Finished devices are shipped globally, adding further emissions
- Use — The phone consumes electricity for charging and relies on data centres and network infrastructure for its services
- Disposal — When discarded, the device becomes e-waste. If not recycled properly, toxic materials can leach into the environment. Only a small percentage of materials in a typical smartphone are currently recovered through recycling
The average person in the UK replaces their smartphone every 2–3 years. Extending the life of a device by even one year significantly reduces its overall environmental impact. This is why the right to repair movement and the push against planned obsolescence are so closely linked to environmental sustainability — making devices last longer is one of the most effective ways to reduce the technology industry’s environmental footprint.
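Because manufacturing dominates the footprint, a quick calculation shows why keeping a device longer matters. The figures below are illustrative assumptions only (60 kg of CO2 to manufacture and 8 kg per year of use, roughly in line with the 70–80% manufacturing share mentioned above), not measurements of any real handset.

```python
# Rough annualised-footprint comparison: keeping a phone for longer spreads the
# large one-off manufacturing emissions over more years of use.
# Figures are illustrative assumptions, not data for any particular handset.

manufacturing_kg_co2 = 60   # assumed one-off emissions from making the phone
use_kg_co2_per_year = 8     # assumed emissions from charging and network use per year

def annual_footprint(years_kept):
    """Average kg of CO2 per year of ownership."""
    total = manufacturing_kg_co2 + use_kg_co2_per_year * years_kept
    return round(total / years_kept, 1)

for years in (2, 3, 4):
    print(years, "years:", annual_footprint(years), "kg CO2 per year")
# 2 years: 38.0 kg CO2 per year
# 3 years: 28.0 kg CO2 per year
# 4 years: 23.0 kg CO2 per year
```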
E-Waste
- Electronic waste (e-waste) is one of the fastest-growing waste streams in the world
- Devices often contain toxic materials (lead, mercury, cadmium) that can leach into soil and water if not disposed of properly
- Much e-waste from wealthy countries is shipped to developing nations, where it is processed in unsafe conditions
- Planned obsolescence — Some manufacturers deliberately design products to become outdated or stop working after a certain time, encouraging consumers to buy replacements
Net Zero and the Tech Industry
Net zero means achieving a balance between the greenhouse gases produced and the greenhouse gases removed from the atmosphere. Many major technology companies have pledged to reach net zero by 2030 or 2040. This involves:
- Powering data centres with renewable energy (solar, wind, hydroelectric)
- Improving the energy efficiency of hardware and software
- Investing in carbon offset programmes (such as tree planting or carbon capture technology)
- Designing products for longevity and recyclability rather than planned obsolescence
- Reducing the carbon footprint of supply chains, from raw material extraction to delivery
However, critics point out that many net zero pledges rely heavily on carbon offsets rather than genuine emissions reductions, and that the rapid growth of AI and cloud computing is increasing the industry’s energy demands faster than efficiency gains can compensate.
For students, understanding net zero is important because exam questions may ask you to evaluate whether technology companies are doing enough to reduce their environmental impact. A strong answer would acknowledge the positive steps being taken while critically assessing whether pledges are backed by meaningful action.
Benefits of Technology for the Environment
- Remote working — Reduces commuting, lowering transport emissions
- Smart energy systems — AI and IoT devices can optimise energy use in homes and businesses
- Environmental monitoring — Sensors and satellites track deforestation, pollution, and climate change
- Digital replacements — E-books, online banking, and digital tickets reduce paper and plastic waste
- Renewable energy management — Computers optimise the distribution of solar and wind power
Stakeholder Analysis
A stakeholder is anyone who is affected by, or has an interest in, a particular decision or technology. In your exam, you may be asked to identify and discuss the different stakeholders involved in a technology-related scenario. Good answers consider multiple perspectives and explain how different groups are affected in different ways.
How to carry out a stakeholder analysis:
- Identify the stakeholders — Who is affected? Think broadly: users, non-users, businesses, employees, regulators, vulnerable groups, the wider community
- Consider each perspective — How does each stakeholder benefit or lose? What are their concerns?
- Evaluate conflicts — Where do the interests of different stakeholders clash?
- Reach a conclusion — Weigh up the arguments and give your own reasoned opinion
Worked example: Social media age restrictions
Suppose the government is considering raising the minimum age for social media use from 13 to 16. Who are the stakeholders?
| Stakeholder | Perspective |
|---|---|
| Young people (13–15) | Would lose access to platforms they use for socialising, creativity, and staying informed. May feel excluded from their peer group. Some would find ways to circumvent restrictions. |
| Parents and carers | Many would welcome the protection of children from harmful content, cyberbullying, and addictive design. Some may prefer to make the decision themselves rather than have the government decide. |
| Schools | Reduced cyberbullying could improve wellbeing and reduce safeguarding incidents. However, social media is also used as a communication and learning tool. |
| Social media companies | Would lose a significant portion of their user base and advertising revenue. Would face the technical challenge of reliably verifying users’ ages. |
| Advertisers | Would lose access to a valuable demographic. May need to find alternative channels to reach young people. |
| Mental health charities | Likely to support the move, citing evidence of social media’s negative impact on young people’s mental health. May argue for additional measures beyond age restrictions. |
A strong exam answer would discuss several of these perspectives, identify where they conflict, and then offer a balanced conclusion supported by reasoning.
Other scenarios where stakeholder analysis is useful:
- A hospital introducing an AI diagnostic system — Stakeholders: patients, doctors, nurses, the hospital trust, the AI company, medical regulators, insurance companies
- A city installing smart CCTV with facial recognition — Stakeholders: residents, local businesses, the police, civil liberties groups, the technology provider, tourists
- A school banning mobile phones — Stakeholders: students, parents, teachers, the school leadership, mobile phone companies, child safety organisations
Practise identifying stakeholders for different scenarios — it is a skill that comes up frequently in exam questions and will strengthen any discussion answer.
Open Source vs Proprietary Software
| Feature | Open Source | Proprietary |
|---|---|---|
| Source code | Publicly available — anyone can view, modify, and share it | Kept secret — only the company can view and modify it |
| Cost | Usually free to use | Usually requires a licence fee or subscription |
| Examples | Linux, Firefox, LibreOffice, Python, VLC | Windows, Microsoft Office, Adobe Photoshop, macOS |
| Support | Community-driven (forums, documentation, volunteers) | Official customer support from the company |
| Customisation | Can be modified to suit your needs | Limited to what the company provides |
| Security | Many eyes on the code means bugs are found quickly, but vulnerabilities are also visible to attackers | Fewer people reviewing the code, but vulnerabilities are hidden from public view |
| Quality | Varies — depends on the community | Often polished with professional QA testing |
Key Vocabulary
Make sure you understand and can use all of the following terms confidently in your exam answers. Using precise technical vocabulary demonstrates strong subject knowledge and helps you communicate your ideas clearly. Try covering the “Definition” column and testing yourself on each term.
| Term | Definition |
|---|---|
| Ethics | A set of moral principles that govern what is considered right and wrong behaviour, particularly in relation to technology decisions |
| Privacy | The right of individuals to control what personal information about them is collected, stored, and shared |
| Digital Footprint | The trail of data left behind when using the internet, including both active contributions (posts, uploads) and passive data collection (browsing history, location tracking) |
| Surveillance | The monitoring of people’s behaviour, activities, or communications, often using technology such as CCTV, phone tapping, or internet monitoring |
| Digital Divide | The gap between those who have access to modern technology and the internet and those who do not, often based on income, location, age, or disability |
| Stakeholder | Any person, group, or organisation that is affected by, or has an interest in, a particular technology decision or system |
| Data Protection Act (DPA 2018) | UK law that controls how personal data is collected, stored, and used, incorporating GDPR principles |
| Computer Misuse Act (CMA 1990) | UK law that makes it illegal to access or modify computer systems without authorisation |
| Copyright | Legal protection given to creators of original works (software, music, writing, art) that prevents others from copying or distributing their work without permission |
| GDPR | General Data Protection Regulation — EU regulation (incorporated into UK law) that strengthens individuals’ rights over their personal data |
| ICO | Information Commissioner’s Office — the UK body responsible for enforcing data protection law and investigating breaches |
| Open Source | Software whose source code is freely available for anyone to view, modify, and distribute |
| Proprietary | Software whose source code is owned by a company and kept secret; typically requires a licence to use |
| E-Waste | Electronic waste — discarded electrical and electronic devices, often containing toxic materials |
| Planned Obsolescence | The deliberate design of products to become outdated or non-functional after a certain period, forcing consumers to buy replacements |
| AI Bias | When an artificial intelligence system produces unfair or discriminatory outcomes because of biased training data or flawed design |
| Net Neutrality | The principle that all internet traffic should be treated equally by internet service providers, without favouring or blocking particular websites or services |
| Cyberbullying | The use of technology (social media, messaging, gaming) to harass, threaten, or intimidate another person |
| Echo Chamber | A situation where algorithms show users only content that reinforces their existing beliefs, limiting exposure to alternative viewpoints |
| Digital Citizenship | The responsible and ethical use of technology, including respecting others online, protecting personal data, and thinking critically about digital content |
Test Yourself
Click on each question to reveal the answer. Try to answer in your head first before looking at the model answer. Questions 11–15 are longer discussion questions similar to what you will encounter in your exam — practise writing full paragraph answers for these.
Give three examples of the data that companies collect about you when you use the internet, and state what they use it for.
Answer: Any three from: browsing history, search queries, location data, purchase history, social media posts and likes, email content, contact lists, device information, app usage patterns, or biometric data (e.g. fingerprint or face scan). Companies use this data for targeted advertising, personalised recommendations, and building profiles of user behaviour.
What is the digital divide? Give two factors that contribute to it.
Answer: The digital divide is the gap between people who have access to technology and the internet and those who do not. Contributing factors include: (1) Income — not everyone can afford devices or broadband. (2) Location — rural areas may have poor internet infrastructure. Other valid factors: age (older people may lack digital skills), disability (inaccessible websites), and education level.
State two principles of the Data Protection Act 2018 / UK GDPR that organisations must follow when handling personal data.
Answer: Any two from: personal data must be (1) processed lawfully, fairly, and transparently, (2) collected for a specific, stated purpose, (3) adequate, relevant, and not excessive, (4) accurate and kept up to date, (5) not kept longer than necessary, (6) kept secure against unauthorised access or loss.
Describe the three main offences under the Computer Misuse Act 1990.
Answer: (1) Unauthorised access to computer material — accessing a system without permission (e.g. hacking into an email account). (2) Unauthorised access with intent to commit a further offence — hacking to commit another crime (e.g. fraud or theft). (3) Unauthorised modification of computer material — changing data without permission (e.g. planting malware, deleting files, deploying ransomware).
A person downloads a film from the internet without paying for it or having permission. Which law have they broken, and why?
Answer: The Copyright, Designs and Patents Act 1988. The film is copyrighted material, and downloading it without the copyright holder’s permission is illegal. This applies to all copyrighted content, including software, music, films, books, and images.
What is AI bias? Give an example.
Answer: AI bias occurs when an artificial intelligence system produces unfair or discriminatory results because it was trained on biased data. Example: A facial recognition system that is less accurate at identifying people with darker skin tones because it was primarily trained on images of lighter-skinned faces. Other examples: recruitment AI that discriminates against women, or predictive policing algorithms that unfairly target certain communities.
Describe two ways in which computing technology has a negative impact on the environment.
Answer: (1) Energy consumption — Data centres, cryptocurrency mining, and the wider IT industry consume enormous amounts of electricity, contributing significantly to carbon emissions. (2) E-waste — Electronic devices contain toxic materials (lead, mercury, cadmium) and are often disposed of unsafely, polluting the environment. Other valid answers: planned obsolescence leading to unnecessary waste, resource depletion from mining rare minerals for components, carbon footprint of training AI models.
What is planned obsolescence, and why is it an ethical concern?
Answer: Planned obsolescence is when manufacturers deliberately design products to become outdated, break, or slow down after a certain period, encouraging consumers to buy replacements. It is an ethical concern because it (1) creates unnecessary e-waste that harms the environment, (2) wastes consumers’ money, and (3) disproportionately affects people who cannot afford frequent upgrades. Some companies have been fined for deliberately slowing down older devices through software updates.
Give two advantages and two disadvantages of using open source software.
Answer: Advantages: (1) Usually free to use, reducing costs. (2) Source code can be viewed and modified, allowing customisation and community-driven improvements. Disadvantages: (1) May lack official customer support — users rely on community forums and documentation. (2) Quality can vary because development depends on volunteer contributors, and some projects may be abandoned.
Describe one positive and one negative impact of social media on society.
Answer: Positive: Social media enables global communication and connection — people can stay in touch with friends and family regardless of distance, and it gives a platform for raising awareness about important issues (e.g. charitable causes, social movements). Negative: Social media has been linked to increased anxiety, depression, and poor body image, particularly among young people. Algorithms that maximise engagement can create echo chambers, spread misinformation, and encourage addictive behaviour. (Any well-reasoned positive and negative impact is acceptable.)
A company is considering installing a facial recognition system to control entry to its office building. Discuss the arguments for and against this plan.
Answer: Arguments for: Facial recognition could improve security by ensuring only authorised personnel enter the building, reducing the risk of theft or unauthorised access. It could also streamline access — employees would not need to carry ID cards or remember codes. Arguments against: It raises serious privacy concerns — employees may feel uncomfortable being constantly monitored and tracked. The system may have accuracy issues, particularly for people of certain ethnic backgrounds, which could be discriminatory. Employees have not necessarily consented to biometric data collection, and the company would need to comply with the DPA 2018 regarding how this sensitive data is stored and used. There is also the question of proportionality — is facial recognition necessary, or would a less invasive method (e.g. key cards) achieve the same goal? A strong answer would weigh up both sides and offer a reasoned conclusion.
Explain what is meant by the 'right to be forgotten'.
Answer: The “right to be forgotten” (formally called the “right to erasure”) means that individuals can request that an organisation deletes their personal data. This right applies when the data is no longer necessary for the purpose it was collected, the individual withdraws their consent, or the data was processed unlawfully. For example, a person could ask a search engine to remove links to outdated or irrelevant personal information about them. However, this right is not absolute — organisations can refuse if the data is needed for legal obligations, public health, archiving in the public interest, or the exercise of free expression. The principle was established by a landmark EU court ruling in 2014 and is now part of UK GDPR.
A school is deciding whether to use open source or proprietary software on its computers. Discuss the factors it should consider.
Answer: Cost: Open source software (e.g. Linux, LibreOffice) is usually free, which could save the school significant money on licence fees. Proprietary software (e.g. Windows, Microsoft Office) requires paid licences but may come with education discounts. Support: Proprietary software typically comes with official customer support, which may be important for a school without dedicated IT staff. Open source relies on community support. Compatibility: Students may use proprietary software at home, so learning different software at school could cause confusion. However, learning open source alternatives teaches transferable skills. Customisation: Open source can be tailored to the school’s exact needs. Security: Both have strengths — open source benefits from community review, while proprietary software may receive regular professional security updates. Training: Staff may need training on unfamiliar open source tools. A good answer would consider at least three of these factors and offer a balanced conclusion.
Describe two ways that individuals can reduce the environmental impact of their technology use.
Answer: (1) Keep devices for longer — Instead of upgrading smartphones or laptops every year or two, individuals can extend the life of their devices by maintaining them, replacing batteries, and using protective cases. This reduces e-waste and the environmental cost of manufacturing new devices. (2) Recycle e-waste responsibly — Rather than throwing old devices in the bin, individuals should take them to designated e-waste recycling centres or return them to the manufacturer. This ensures that toxic materials are safely processed and valuable materials (gold, copper, rare earth elements) are recovered. Other valid answers include: reducing streaming quality, using energy-efficient settings, buying refurbished devices, or supporting companies with strong environmental policies.
A social media company collects personal data about its young users. Explain three requirements it must meet under the Data Protection Act 2018 / UK GDPR.
Answer: (1) Lawful and transparent processing — The company must have a lawful basis for collecting young people’s data and must clearly explain what data is being collected and why, using language that young people can understand. (2) Data minimisation — The company should only collect data that is adequate, relevant, and necessary for the stated purpose. They should not collect excessive information about young users. (3) Security — The company must keep the data secure using appropriate technical and organisational measures, protecting it from unauthorised access, accidental loss, or data breaches. Additional valid points: the company must obtain verifiable parental consent for children under 13 (under the Age Appropriate Design Code), data must not be kept longer than necessary, and young people have the right to have their data deleted on request.
Exam Tips for Issues & Impact Questions
This topic appears in every GCSE Computer Science paper, and the questions often carry high marks. Here is how to maximise your marks:
- Always discuss both sides. Whether the question asks about surveillance, AI, environmental impact, or any other issue, the examiner wants to see a balanced argument. State the benefits, state the drawbacks, and then give your conclusion. Even if you feel strongly about one side, you must acknowledge the other perspective to earn full marks.
- Name specific laws. Do not just say “this is against the law.” State which law — the Data Protection Act 2018, the Computer Misuse Act 1990, or the Copyright, Designs and Patents Act 1988. This shows precise knowledge and earns more marks. If relevant, mention who enforces the law (e.g. the ICO enforces data protection).
- Use real-world examples. Mentioning cases like the Cambridge Analytica scandal, the British Airways data breach, or the TalkTalk hack demonstrates that you understand how these issues apply in practice. Even a brief reference to a real case makes your answer stand out.
- Identify stakeholders. When discussing the impact of a technology or policy, explain how different groups are affected. Consider users, businesses, governments, vulnerable groups, and the wider community. This shows the examiner that you can think beyond the obvious.
- Conclude with your own reasoned opinion. After presenting both sides, state what you believe and explain why. The examiner is looking for critical thinking, not just memorised facts. Phrases like “On balance, I believe…” or “Considering the evidence, the strongest argument is…” signal a mature, evaluative response.
Common command words in this topic:
- “Describe” — Give a clear, factual account. Say what something is or what it does.
- “Explain” — Say what something is and why it matters. Give reasons and consequences.
- “Discuss” — Present multiple viewpoints, consider arguments for and against, and reach a conclusion. This is where stakeholder analysis and real-world examples are most important.
- “Evaluate” — Weigh up the strengths and weaknesses and make a judgement. Similar to “discuss” but with a stronger emphasis on your conclusion.
Video Resources
These Craig 'n' Dave videos cover the ethical, legal, environmental, and emerging technology topics you need to know.
Past Paper Questions
Practise these exam-style questions. Click each question to reveal the mark scheme.
Explain what is meant by the 'digital divide' and give two examples. 4 marks
Mark scheme:
- The digital divide is the gap between those who have access to technology and those who don't (1 mark)
- Example 1: Economic — some people cannot afford computers/internet (1 mark)
- Example 2: Geographic — rural areas may have poor broadband (1 mark)
- Example 3: Generational — older people may struggle with technology (1 mark)
Describe two offences under the Computer Misuse Act 1990. 4 marks
Mark scheme:
- Unauthorised access to a computer system (1 mark) — e.g. hacking into someone's account (1 mark)
- Unauthorised modification of data (1 mark) — e.g. spreading a virus or deleting files (1 mark)
- Unauthorised access with intent to commit further offences (1 mark) — e.g. accessing bank systems to steal money (1 mark)
Explain two ways that computers have a negative impact on the environment. 4 marks
Mark scheme:
- Energy consumption: Data centres and devices use large amounts of electricity (1 mark), often from non-renewable sources contributing to climate change (1 mark)
- E-waste: Discarded devices contain toxic materials (1 mark) that pollute the environment when not disposed of properly (1 mark)
Explain what is meant by 'algorithmic bias' and why it is a concern. 4 marks
Mark scheme:
- Algorithmic bias is when AI systems produce unfair/prejudiced results (1 mark)
- This happens because AI learns from existing data (1 mark)
- If the training data contains human biases, the AI replicates them (1 mark)
- This can lead to discrimination in areas like recruitment, lending, or criminal justice (1 mark)
Your Responsibilities as a Digital Citizen
You are part of the first generation to grow up entirely in the digital age. That comes with both incredible opportunities and real responsibilities. Being a responsible digital citizen is not just about following rules — it is about making thoughtful, informed choices about how you use technology and how you treat others online. Here is what being a responsible digital citizen looks like:
- Respect others online — Treat people with the same kindness and respect online as you would face to face. Think before you post, comment, or share.
- Protect your own data — Use strong passwords, enable two-factor authentication, and be cautious about what personal information you share.
- Respect intellectual property — Do not pirate software, music, or films. Credit creators when you use their work. Understand open source licences.
- Think critically — Not everything you read online is true. Check sources, question claims, and be aware of how algorithms shape what you see.
- Consider the environment — Recycle old devices properly. Do not upgrade your phone every year if you do not need to. Be mindful of your digital energy consumption.
- Speak up — If you see cyberbullying, report it. If you spot biased technology, question it. You have more power to drive change than you might think.
The technology industry needs people who can code, but it also desperately needs people who can think critically about the impact of what they create. The fact that you are studying both the technical and ethical sides of computing puts you in a strong position to shape technology for the better.
Remember: the code you write and the systems you build will be used by real people. Always ask yourself: Who benefits from this technology? Who might be harmed? Is this fair? How can I make it better?
As you continue your studies and eventually enter the workforce, you will face real decisions about these issues. Whether you are building a website, designing an app, training an AI model, or choosing which technologies to use, the knowledge you have gained in this topic will help you make responsible, informed choices. The best computer scientists are not just technically skilled — they are also ethical, thoughtful, and aware of the wider impact of their work.
Further Reading
- BBC Bitesize — Edexcel GCSE Computer Science — Comprehensive revision resources covering all topics including ethics, law, and environmental impact
- Isaac Computer Science — Impacts of Technology — In-depth materials on the ethical, legal, and environmental issues in computing
- GCSE Topic 5: Issues & Impact — Full Edexcel specification coverage with discussion questions and case studies
- Revision Activities — Memory match and connection wall games for ethics and law