Technology Affects Everyone

Technology is not neutral. Every app, algorithm, and device has an impact on people, society, and the planet. Social media can connect friends across continents, but it can also spread misinformation and harm mental health. Artificial intelligence can diagnose diseases, but it can also discriminate against people based on biased training data.

As a computer scientist, you have a responsibility to think about the consequences of the technology you create and use. This topic explores the ethical, legal, cultural, environmental, and privacy issues surrounding modern technology — and it is a significant part of your GCSE exam.

Understanding these issues does not just help you pass an exam. It helps you become a responsible digital citizen who can make informed decisions about the technology that shapes your life.

This topic covers privacy and data collection, surveillance, the digital divide, the UK laws that govern computing, ethical debates such as AI bias and facial recognition, the environmental impact of technology, and how to analyse the stakeholders affected by a technology decision.

Why This Topic Matters for Your Exam: Issues and Impact questions appear on every GCSE Computer Science paper and often carry the highest marks. These are discussion questions that require you to present balanced arguments, use specific examples, and give your own reasoned opinion. Unlike many other topics in computer science, there are rarely “right or wrong” answers here — what matters is the quality of your reasoning and your ability to consider multiple viewpoints.

Privacy, Surveillance, and Access

Privacy and Data Collection

Every time you use the internet, you leave a digital footprint. Companies collect vast amounts of data about you, often without you realising how much they know: your browsing history, search queries, location data, purchase history, social media posts and likes, and information about the device you are using.

This data is incredibly valuable. Companies use it to target advertisements, personalise recommendations, and even influence your behaviour. It can also be sold to third parties, leaked in data breaches, or accessed by governments.

Many people are unaware of the full extent of data collection. When you agree to a website’s terms and conditions or accept cookies, you may be consenting to far more data collection than you realise. Studies have shown that most people do not read privacy policies — and even those who do often struggle to understand what they are agreeing to, because the language used is deliberately complex.

Key Concept: Digital Footprint. Your digital footprint is the trail of data you leave behind when you use the internet. It includes things you deliberately share (posts, photos, comments) and information collected automatically (browsing history, location data, device information). Once data is online, it can be very difficult — sometimes impossible — to remove permanently.

Surveillance and Monitoring

Technology enables surveillance on a scale never before possible: CCTV cameras monitor public spaces, governments can intercept phone calls and emails, and internet service providers are required to keep records of the websites every citizen visits.

The debate centres on security vs privacy: surveillance can help prevent crime and terrorism, but it also means innocent people are constantly being watched. Where should the line be drawn?

This is one of the most important ethical debates in technology. Those who favour surveillance argue that “if you have nothing to hide, you have nothing to fear.” However, privacy advocates point out that surveillance can have a chilling effect on free speech — people behave differently when they know they are being watched, even if they are doing nothing wrong. History also shows that surveillance powers, once granted, are rarely reduced and can be misused by future governments.

The Digital Divide

The digital divide is the gap between people who have access to technology and the internet and those who do not. This divide can be based on income (devices and broadband are expensive), location (rural areas often have poor internet infrastructure), age (older people may lack digital skills), disability (many websites and devices are not accessible), and education level.

As more services move online (banking, healthcare, government services, job applications), the digital divide becomes a serious equality issue. People without access risk being left behind.

The COVID-19 pandemic highlighted the digital divide sharply. When schools moved to online learning, students without reliable internet access or their own devices fell behind. Similarly, people who could not access online grocery delivery or digital health services were at a disadvantage. This demonstrated that the digital divide is not just an inconvenience — it can have real consequences for education, health, and quality of life.

Exam Tip: Questions on the digital divide often ask you to discuss both sides. Technology brings many benefits, but those benefits are not shared equally. Always consider who is included and who is excluded by a particular technology or policy.

Key Topics in Detail

The Law

Several UK laws are relevant to computer science. You need to know the key points of each for your exam. Being able to name the correct law and explain its key provisions is essential for high marks. Pay close attention to the differences between each law — students often confuse them.

Data Protection Act 2018 / UK GDPR

This law controls how personal data (any information that can identify a living person) is collected, stored, and used. It replaced the Data Protection Act 1998 and incorporates the EU’s General Data Protection Regulation (GDPR) into UK law.

Key principles — Personal data must be:

  1. Processed lawfully, fairly, and transparently
  2. Collected for a specific, stated purpose
  3. Adequate, relevant, and not excessive
  4. Accurate and kept up to date
  5. Not kept longer than necessary
  6. Kept secure against unauthorised access or loss

Your rights under the DPA 2018 / UK GDPR include the right to see what data an organisation holds about you, the right to have inaccurate data corrected, the right to have your data erased (the “right to be forgotten”), and the right not to be subject to decisions based solely on automated processing where those decisions significantly affect you.

Key Concept: Data Protection Act 2018. The DPA 2018 protects individuals’ personal data. Organisations that break the law can face fines of up to £17.5 million or 4% of their annual global turnover, whichever is greater. The law is enforced by the Information Commissioner’s Office (ICO).
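As a small illustration of the “not kept longer than necessary” principle, the sketch below flags records that have exceeded a retention period. The records, names, and one-year policy are all hypothetical — real systems would apply whatever retention period the organisation has stated for each purpose.

```python
# Hypothetical sketch of the "storage limitation" principle:
# flag records that have been kept longer than the stated retention period.

from datetime import date, timedelta

RETENTION = timedelta(days=365)  # hypothetical policy: keep data for one year

records = [
    {"user": "alice", "collected": date(2023, 1, 10)},
    {"user": "bob",   "collected": date(2024, 11, 2)},
]

def overdue_for_deletion(records, today):
    """Return users whose data has exceeded the retention period."""
    return [r["user"] for r in records if today - r["collected"] > RETENTION]

print(overdue_for_deletion(records, date(2025, 1, 1)))  # ['alice']
```

In practice, compliance is an organisational process rather than a single function, but the principle is the same: data collected for a purpose should be deleted once that purpose has passed.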

Real-world cases: In 2020 the ICO fined British Airways £20 million after a data breach exposed the personal and payment details of around 400,000 customers, and fined Marriott £18.4 million over a breach of its guest reservation database.

Computer Misuse Act 1990

This law makes it illegal to access or modify computer systems without permission. It was introduced in response to growing cyber crime and has three main offences:

  1. Unauthorised access to computer material — Accessing a computer system without permission (e.g. hacking into someone’s email). Up to 2 years in prison.
  2. Unauthorised access with intent to commit a further offence — Hacking with the aim of committing another crime (e.g. accessing a bank system to steal money). Up to 5 years in prison.
  3. Unauthorised modification of computer material — Changing data without permission (e.g. deleting files, planting a virus, encrypting data with ransomware). Up to 10 years in prison.

A later amendment added: Making, supplying, or obtaining tools for use in computer misuse offences (e.g. creating and distributing hacking tools or malware).

Real-world cases: Several people were convicted under the Act following the 2015 TalkTalk hack, in which attackers gained unauthorised access to the personal data of thousands of customers.

Copyright, Designs and Patents Act 1988

This law protects the creators of original work, including software and code, music, films, books and other written work, images, and art.

It is illegal to copy, distribute, or modify someone’s copyrighted work without their permission. This includes pirating software, downloading films illegally, or copying code from a website without a licence. Penalties can include fines and imprisonment.

An important distinction exists between copyright and licensing. A copyright holder can choose to license their work under different terms. For example, a Creative Commons licence allows others to use and share work under certain conditions, while an open source software licence allows anyone to view and modify the source code. Understanding licensing is important for any computer scientist, as using code or content without an appropriate licence is a breach of copyright law.

Real-world cases: The founders of The Pirate Bay, a site widely used to share copyrighted films, music, and software, were convicted in 2009 and sentenced to prison terms and heavy fines for assisting copyright infringement.

Freedom of Information Act 2000

This law gives the public the right to request information held by public authorities (such as government departments, the NHS, councils, police, and schools). The organisation must respond within 20 working days. Some information can be withheld if it relates to national security, personal data, or commercial interests.

The FOIA is an important tool for transparency and accountability. Journalists and members of the public regularly use it to uncover information about government spending, policy decisions, and public services. For example, FOI requests have revealed data about hospital waiting times, school inspection results, and local council expenditure. The Act does not apply to private companies — it only covers public bodies.

Regulation of Investigatory Powers Act 2000 (RIPA)

RIPA regulates the powers of public bodies to carry out surveillance and investigation. It covers the interception of communications (phone calls, emails, internet activity), the use of covert surveillance and informants, and the acquisition of communications data. RIPA was designed to ensure that surveillance is carried out lawfully and proportionately, but it has been criticised for giving authorities overly broad powers. It has since been supplemented by the Investigatory Powers Act 2016, which requires internet service providers to store records of websites visited by every citizen for 12 months.

Common Mistake: Students often confuse the Data Protection Act with the Computer Misuse Act. Remember: the DPA is about protecting personal data (how information about people is collected and stored). The CMA is about preventing unauthorised access to computer systems (hacking, malware, and tampering). They are two different laws covering two different issues.

Ethical Issues in Computing

Ethics in computing refers to the moral principles that guide decisions about how technology is designed, developed, and used. Unlike laws, which are enforced by authorities, ethics are about what should be done, not just what must be done. A technology can be perfectly legal but still raise serious ethical concerns. The following sections explore some of the most important ethical debates in modern computing.

AI Bias

Artificial intelligence systems learn from data. If that data contains biases (conscious or unconscious), the AI will reproduce and amplify those biases. For example, facial recognition systems trained mostly on lighter-skinned faces are less accurate for people with darker skin tones, recruitment tools trained on historical hiring data have discriminated against women, and predictive policing algorithms can unfairly target certain communities.

The ethical question is: who is responsible when an AI makes a biased or unfair decision? The programmer? The company? The data provider? There is currently no clear legal framework for assigning responsibility in these cases, which is why governments around the world are developing AI regulation. The EU has proposed an AI Act that would classify AI systems by risk level and impose stricter rules on high-risk applications such as recruitment, healthcare, and law enforcement.
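The mechanism behind AI bias can be shown with a toy sketch. The “model” below simply learns historical approval rates per group, so any imbalance in the (entirely made-up) training data is reproduced in its decisions:

```python
# Toy illustration with hypothetical data: a "model" that learns historical
# approval rates per group will reproduce any bias in its training data --
# it has no built-in notion of fairness.

from collections import defaultdict

# Hypothetical past decisions: (group, was the applicant approved?)
training_data = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def train(data):
    """Learn the approval rate for each group from past decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in data:
        counts[group][1] += 1
        if approved:
            counts[group][0] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def predict(model, group):
    """Approve an applicant if their group's historical rate exceeds 50%."""
    return model[group] > 0.5

model = train(training_data)
print(model)                      # {'group_a': 0.75, 'group_b': 0.25}
print(predict(model, "group_a"))  # True  -- the historical bias is repeated
print(predict(model, "group_b"))  # False
```

Nothing in the code mentions the groups directly; the unfairness comes entirely from the data, which is why “the algorithm decided” is not, by itself, an ethical defence.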

Autonomous Vehicles

Self-driving cars raise difficult moral questions. If an accident is unavoidable, how should the car decide what to do? Should it prioritise the safety of its passengers, or pedestrians? Who is legally responsible if an autonomous vehicle causes an accident — the passenger, the manufacturer, or the programmer who wrote the code?

These questions are not just theoretical. Several fatal accidents involving autonomous and semi-autonomous vehicles have already occurred. Current UK law requires a human driver to be in control at all times, but as the technology develops, the law will need to evolve. The UK government has been developing a framework for self-driving vehicles that considers questions of liability, insurance, and data protection.

Facial Recognition

Facial recognition technology can identify individuals from camera footage. It has legitimate uses (unlocking phones, finding missing persons, verifying identity at airports) but raises serious concerns: people in public spaces are scanned without their consent, the technology is less accurate for some ethnic groups and can lead to misidentification, and the biometric data it collects is highly sensitive and must be stored securely.

In 2020, the Court of Appeal ruled that South Wales Police’s use of automated facial recognition technology was unlawful because it did not adequately consider the impact on privacy and equality. This landmark ruling showed that even law enforcement must carefully balance the benefits of facial recognition against its risks to civil liberties.

Deepfake Technology

Deepfakes are AI-generated images, audio, or video that realistically imitate real people. The technology uses machine learning to swap faces, clone voices, or create entirely fabricated footage that is extremely difficult to distinguish from reality. Deepfakes raise several serious concerns: they can spread political misinformation through fabricated footage of public figures, enable fraud through cloned voices, be used to harass individuals with fake footage of them, and erode trust in genuine recordings, which can now be dismissed as fake.

Detecting deepfakes is an active area of research, but the technology is advancing faster than detection methods. Some experts argue that digital watermarking and content provenance standards (which track how and where digital content was created) may be more effective than trying to detect fakes after the fact.

Algorithmic Decision-Making

Algorithms increasingly make decisions that affect people’s lives — from determining who gets a loan or a job interview, to calculating insurance premiums, to deciding what content you see on social media. This raises important ethical questions: can the people affected understand and challenge the decision, who is accountable when an algorithm gets it wrong, and how can algorithms be prevented from reproducing the biases in their training data?

Under the UK GDPR, individuals have the right not to be subject to decisions based solely on automated processing if those decisions significantly affect them. They also have the right to request human review of automated decisions.

A notable example occurred in 2020 when an algorithm was used to calculate A-level grades in England after exams were cancelled. The algorithm systematically downgraded students at state schools in disadvantaged areas while upgrading students at private schools. After widespread protests, the algorithm-generated grades were replaced with teacher-assessed grades. This case demonstrated the real-world harm that biased algorithms can cause and highlighted the importance of transparency and accountability in automated decision-making.

Net Neutrality

Net neutrality is the principle that all internet traffic should be treated equally by internet service providers (ISPs). Under net neutrality, ISPs cannot charge more for access to certain websites, slow down competitors’ services, or give priority to content from companies that pay extra.

In the US, net neutrality rules were repealed in 2017, sparking intense debate. In the EU and UK, net neutrality protections remain in place, though they are subject to ongoing review. The debate is a good example of how technology policy involves balancing the interests of multiple stakeholders: consumers, businesses, ISPs, content creators, and regulators.

Right to Repair

The right to repair movement argues that consumers should be able to repair their own electronic devices, or take them to independent repair shops, rather than being forced to use the manufacturer’s expensive repair services. Many manufacturers make their devices deliberately difficult to repair by using proprietary screws, gluing components together, or refusing to supply spare parts.

The EU and several US states have introduced right-to-repair legislation, and there is growing pressure for the UK to follow.

Digital Addiction

Many technology platforms are deliberately designed to be as addictive as possible. Features like infinite scrolling, autoplay, push notifications, streaks, and “like” counts exploit psychological mechanisms to keep users engaged for as long as possible. This raises the question: are tech companies responsible for the addictive nature of their products?

This is an area where the interests of different stakeholders clearly conflict: technology companies profit from maximising user engagement, while users (particularly young people) may suffer harm from excessive use. The question of where responsibility lies — with the individual, the company, or the regulator — is one of the most important ethical debates in modern computing.

Social Media Impact

Social media has transformed communication, but it also raises concerns: it has been linked to anxiety, depression, and poor body image, particularly among young people; engagement-driven algorithms can create echo chambers and spread misinformation; and it provides new channels for cyberbullying.

Positive impacts of social media include global communication and connection regardless of distance, platforms for creativity and self-expression, and the ability to raise awareness of important issues such as charitable causes and social movements.

Environmental Impact

Energy Consumption

Data centres, cryptocurrency mining, and the wider IT industry consume enormous amounts of electricity, contributing significantly to carbon emissions. Servers run around the clock and require constant cooling, so the energy cost of every search, stream, and upload adds up at global scale.

Carbon Footprint of AI

Training large artificial intelligence models requires enormous computational power running for weeks or months. Researchers have estimated that training a single large AI model can emit as much carbon dioxide as five cars produce over their entire lifetimes. As AI becomes more widespread, its energy demands are growing rapidly. This has led to calls for “green AI” — developing more energy-efficient algorithms and training models using renewable energy sources.

Did You Know? Training a single large AI model can produce approximately 284 tonnes of carbon dioxide — roughly equivalent to the lifetime emissions of five average cars. As AI is integrated into more products and services, the environmental cost of this technology is becoming an increasingly important issue.

Rare Earth Minerals

Modern electronic devices depend on rare earth minerals such as lithium, cobalt, tantalum, and neodymium. These are used in batteries, screens, circuit boards, and speakers. Mining these minerals causes significant environmental and human harm: habitat destruction and water pollution around mines, enormous water consumption for lithium extraction, and dangerous working conditions, including the use of child labour in some cobalt mines.

Lifecycle of a Smartphone

Understanding the full lifecycle of a device helps illustrate the true environmental cost of technology:

  1. Mining — Raw materials (lithium, cobalt, gold, copper, rare earth elements) are extracted from the ground, often in developing countries, causing habitat destruction and pollution
  2. Manufacture — Components are assembled in factories, typically in East Asia, using large amounts of energy and water. The manufacturing process alone accounts for around 70–80% of a smartphone’s total carbon footprint
  3. Transport — Finished devices are shipped globally, adding further emissions
  4. Use — The phone consumes electricity for charging and relies on data centres and network infrastructure for its services
  5. Disposal — When discarded, the device becomes e-waste. If not recycled properly, toxic materials can leach into the environment. Only a small percentage of materials in a typical smartphone are currently recovered through recycling

The average person in the UK replaces their smartphone every 2–3 years. Extending the life of a device by even one year significantly reduces its overall environmental impact. This is why the right to repair movement and the push against planned obsolescence are so closely linked to environmental sustainability — making devices last longer is one of the most effective ways to reduce the technology industry’s environmental footprint.
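The figures quoted in this section can be sanity-checked with some quick arithmetic (the numbers are taken from the text above; the calculation itself is only illustrative):

```python
# Rough arithmetic on the figures quoted in this section.

AI_TRAINING_EMISSIONS_T = 284   # tonnes of CO2 for one large training run
CARS_EQUIVALENT = 5             # "five cars over their entire lifetimes"

per_car_lifetime_t = AI_TRAINING_EMISSIONS_T / CARS_EQUIVALENT
print(per_car_lifetime_t)       # 56.8 tonnes of CO2 per car lifetime

# Since manufacturing dominates a phone's footprint, buying fewer phones
# is the biggest lever. Phones owned over a 12-year period:
YEARS = 12
phones_2yr = YEARS / 2          # replaced every 2 years -> 6 phones
phones_3yr = YEARS / 3          # replaced every 3 years -> 4 phones
reduction = 1 - phones_3yr / phones_2yr
print(f"{reduction:.0%}")       # 33% fewer devices manufactured
```

Keeping a phone for three years instead of two means a third fewer devices manufactured over the same period, which is why extending device lifetimes is such an effective environmental measure.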

Exam Tip: Questions about environmental impact often ask you to describe the full lifecycle of a device. Make sure you can explain each stage (extraction, manufacture, transport, use, disposal) and the environmental harm at each stage. Mentioning that manufacturing accounts for the majority of a device’s carbon footprint is an excellent point that many students miss.

E-Waste

Did You Know? The UK generates approximately 1.5 million tonnes of electronic waste per year, making it one of the largest producers of e-waste per person in the world. Only a fraction of this is properly recycled — the rest ends up in landfill or is exported to developing countries.

Net Zero and the Tech Industry

Net zero means achieving a balance between the greenhouse gases produced and the greenhouse gases removed from the atmosphere. Many major technology companies have pledged to reach net zero by 2030 or 2040. This involves powering data centres with renewable energy, improving the energy efficiency of hardware and software, designing products to last longer and be easier to recycle, and purchasing carbon offsets for emissions that remain.

However, critics point out that many net zero pledges rely heavily on carbon offsets rather than genuine emissions reductions, and that the rapid growth of AI and cloud computing is increasing the industry’s energy demands faster than efficiency gains can compensate.

For students, understanding net zero is important because exam questions may ask you to evaluate whether technology companies are doing enough to reduce their environmental impact. A strong answer would acknowledge the positive steps being taken while critically assessing whether pledges are backed by meaningful action.

Benefits of Technology for the Environment

Technology can also help the environment. Video conferencing reduces the need for travel, smart meters and thermostats cut household energy use, satellite and sensor data let scientists monitor climate change and deforestation, and software is used to manage renewable energy grids efficiently.

Key Concept: Balancing Benefits and Harms. Technology is neither purely good nor purely bad. The key is to weigh up the benefits against the harms. In exam answers, always discuss both sides of the argument and come to a reasoned conclusion. Show that you can think critically about the impact of technology on individuals, society, and the environment.

Stakeholder Analysis

A stakeholder is anyone who is affected by, or has an interest in, a particular decision or technology. In your exam, you may be asked to identify and discuss the different stakeholders involved in a technology-related scenario. Good answers consider multiple perspectives and explain how different groups are affected in different ways.

How to carry out a stakeholder analysis:

  1. Identify the stakeholders — Who is affected? Think broadly: users, non-users, businesses, employees, regulators, vulnerable groups, the wider community
  2. Consider each perspective — How does each stakeholder benefit or lose? What are their concerns?
  3. Evaluate conflicts — Where do the interests of different stakeholders clash?
  4. Reach a conclusion — Weigh up the arguments and give your own reasoned opinion

Worked example: Social media age restrictions

Suppose the government is considering raising the minimum age for social media use from 13 to 16. Who are the stakeholders?

Stakeholder perspectives:

Young people (13–15): Would lose access to platforms they use for socialising, creativity, and staying informed. May feel excluded from their peer group. Some would find ways to circumvent restrictions.

Parents and carers: Many would welcome the protection of children from harmful content, cyberbullying, and addictive design. Some may prefer to make the decision themselves rather than have the government decide.

Schools: Reduced cyberbullying could improve wellbeing and reduce safeguarding incidents. However, social media is also used as a communication and learning tool.

Social media companies: Would lose a significant portion of their user base and advertising revenue. Would face the technical challenge of reliably verifying users’ ages.

Advertisers: Would lose access to a valuable demographic. May need to find alternative channels to reach young people.

Mental health charities: Likely to support the move, citing evidence of social media’s negative impact on young people’s mental health. May argue for additional measures beyond age restrictions.

A strong exam answer would discuss several of these perspectives, identify where they conflict, and then offer a balanced conclusion supported by reasoning.

Other scenarios where stakeholder analysis is useful include the use of facial recognition in public spaces, the introduction of autonomous vehicles, net neutrality regulation, and right-to-repair legislation.

Practise identifying stakeholders for different scenarios — it is a skill that comes up frequently in exam questions and will strengthen any discussion answer.

Exam Tip: When a question asks you to “discuss” an issue, the examiner is looking for multiple perspectives. Identifying stakeholders and explaining how they are affected differently is one of the most effective ways to structure your answer. Always aim for at least two or three different viewpoints.

Did You Know? The “right to be forgotten” was established by a landmark EU Court of Justice ruling in 2014 (Google Spain v AEPD and Mario Costeja González). The court ruled that individuals have the right to request that search engines remove links to personal information that is “inadequate, irrelevant, or no longer relevant.” This principle is now enshrined in the UK GDPR as the “right to erasure.”

Open Source vs Proprietary Software

How open source and proprietary software compare:

Source code: open source code is publicly available, so anyone can view, modify, and share it; proprietary code is kept secret, and only the company can view and modify it.

Cost: open source is usually free to use; proprietary software usually requires a licence fee or subscription.

Examples: open source includes Linux, Firefox, LibreOffice, Python, and VLC; proprietary includes Windows, Microsoft Office, Adobe Photoshop, and macOS.

Support: open source support is community-driven (forums, documentation, volunteers); proprietary software comes with official customer support from the company.

Customisation: open source can be modified to suit your needs; proprietary software is limited to what the company provides.

Security: with open source, many eyes on the code mean bugs are found quickly, but vulnerabilities are also visible to attackers; with proprietary software, fewer people review the code, but vulnerabilities are hidden from public view.

Quality: open source quality varies, depending on the community; proprietary software is often polished with professional QA testing.
Common Mistake: Students sometimes say open source software is “less secure because everyone can see the code.” This is an oversimplification. While the code is visible, this also means thousands of developers can spot and fix vulnerabilities. Many security experts argue that open source is more secure because of this transparency. The correct exam answer is to discuss both perspectives.

Key Vocabulary

Make sure you understand and can use all of the following terms confidently in your exam answers. Using precise technical vocabulary demonstrates strong subject knowledge and helps you communicate your ideas clearly. Try covering the definitions and testing yourself on each term.

Ethics: A set of moral principles that govern what is considered right and wrong behaviour, particularly in relation to technology decisions

Privacy: The right of individuals to control what personal information about them is collected, stored, and shared

Digital Footprint: The trail of data left behind when using the internet, including both active contributions (posts, uploads) and passive data collection (browsing history, location tracking)

Surveillance: The monitoring of people’s behaviour, activities, or communications, often using technology such as CCTV, phone tapping, or internet monitoring

Digital Divide: The gap between those who have access to modern technology and the internet and those who do not, often based on income, location, age, or disability

Stakeholder: Any person, group, or organisation that is affected by, or has an interest in, a particular technology decision or system

Data Protection Act (DPA 2018): UK law that controls how personal data is collected, stored, and used, incorporating GDPR principles

Computer Misuse Act (CMA 1990): UK law that makes it illegal to access or modify computer systems without authorisation

Copyright: Legal protection given to creators of original works (software, music, writing, art) that prevents others from copying or distributing their work without permission

GDPR: General Data Protection Regulation — EU regulation (incorporated into UK law) that strengthens individuals’ rights over their personal data

ICO: Information Commissioner’s Office — the UK body responsible for enforcing data protection law and investigating breaches

Open Source: Software whose source code is freely available for anyone to view, modify, and distribute

Proprietary: Software whose source code is owned by a company and kept secret; typically requires a licence to use

E-Waste: Electronic waste — discarded electrical and electronic devices, often containing toxic materials

Planned Obsolescence: The deliberate design of products to become outdated or non-functional after a certain period, forcing consumers to buy replacements

AI Bias: When an artificial intelligence system produces unfair or discriminatory outcomes because of biased training data or flawed design

Net Neutrality: The principle that all internet traffic should be treated equally by internet service providers, without favouring or blocking particular websites or services

Cyberbullying: The use of technology (social media, messaging, gaming) to harass, threaten, or intimidate another person

Echo Chamber: A situation where algorithms show users only content that reinforces their existing beliefs, limiting exposure to alternative viewpoints

Digital Citizenship: The responsible and ethical use of technology, including respecting others online, protecting personal data, and thinking critically about digital content

Test Yourself

Try to answer each question in your head before reading the model answer. Questions 11–15 are longer discussion questions similar to what you will encounter in your exam — practise writing full paragraph answers for these.

Q1: Name three types of personal data that companies might collect about you online.

Answer: Any three from: browsing history, search queries, location data, purchase history, social media posts and likes, email content, contact lists, device information, app usage patterns, or biometric data (e.g. fingerprint or face scan). Companies use this data for targeted advertising, personalised recommendations, and building profiles of user behaviour.

Q2: What is the digital divide? Give two factors that contribute to it.

Answer: The digital divide is the gap between people who have access to technology and the internet and those who do not. Contributing factors include: (1) Income — not everyone can afford devices or broadband. (2) Location — rural areas may have poor internet infrastructure. Other valid factors: age (older people may lack digital skills), disability (inaccessible websites), and education level.

Q3: State two principles of the Data Protection Act 2018.

Answer: Any two from: personal data must be (1) processed lawfully, fairly, and transparently, (2) collected for a specific, stated purpose, (3) adequate, relevant, and not excessive, (4) accurate and kept up to date, (5) not kept longer than necessary, (6) kept secure against unauthorised access or loss.

Q4: Describe the three offences under the Computer Misuse Act 1990.

Answer: (1) Unauthorised access to computer material — accessing a system without permission (e.g. hacking into an email account). (2) Unauthorised access with intent to commit a further offence — hacking to commit another crime (e.g. fraud or theft). (3) Unauthorised modification of computer material — changing data without permission (e.g. planting malware, deleting files, deploying ransomware).

Q5: A student downloads a film from an illegal website. Which law have they broken?

Answer: The Copyright, Designs and Patents Act 1988. The film is copyrighted material, and downloading it without the copyright holder’s permission is illegal. This applies to all copyrighted content, including software, music, films, books, and images.

Q6: Explain what AI bias is and give one real-world example.

Answer: AI bias occurs when an artificial intelligence system produces unfair or discriminatory results because it was trained on biased data. Example: A facial recognition system that is less accurate at identifying people with darker skin tones because it was primarily trained on images of lighter-skinned faces. Other examples: recruitment AI that discriminates against women, or predictive policing algorithms that unfairly target certain communities.

Q7: Give two environmental concerns related to the use of technology.

Answer: (1) Energy consumption — Data centres, cryptocurrency mining, and the wider IT industry consume enormous amounts of electricity, contributing significantly to carbon emissions. (2) E-waste — Electronic devices contain toxic materials (lead, mercury, cadmium) and are often disposed of unsafely, polluting the environment. Other valid answers: planned obsolescence leading to unnecessary waste, resource depletion from mining rare minerals for components, carbon footprint of training AI models.

Q8: What is planned obsolescence, and why is it an ethical concern?

Answer: Planned obsolescence is when manufacturers deliberately design products to become outdated, break, or slow down after a certain period, encouraging consumers to buy replacements. It is an ethical concern because it (1) creates unnecessary e-waste that harms the environment, (2) wastes consumers’ money, and (3) disproportionately affects people who cannot afford frequent upgrades. Some companies have been fined for deliberately slowing down older devices through software updates.

Q9: Give two advantages and two disadvantages of open source software compared to proprietary software.

Answer: Advantages: (1) Usually free to use, reducing costs. (2) Source code can be viewed and modified, allowing customisation and community-driven improvements. Disadvantages: (1) May lack official customer support — users rely on community forums and documentation. (2) Quality can vary because development depends on volunteer contributors, and some projects may be abandoned.

Q10: Discuss one positive and one negative impact of social media on society.

Answer: Positive: Social media enables global communication and connection — people can stay in touch with friends and family regardless of distance, and it gives a platform for raising awareness about important issues (e.g. charitable causes, social movements). Negative: Social media has been linked to increased anxiety, depression, and poor body image, particularly among young people. Algorithms that maximise engagement can create echo chambers, spread misinformation, and encourage addictive behaviour. (Any well-reasoned positive and negative impact is acceptable.)

Q11: A company wants to install facial recognition in its offices. Discuss the ethical implications. (4 marks)

Answer: Arguments for: Facial recognition could improve security by ensuring only authorised personnel enter the building, reducing the risk of theft or unauthorised access. It could also streamline access — employees would not need to carry ID cards or remember codes. Arguments against: It raises serious privacy concerns — employees may feel uncomfortable being constantly monitored and tracked. The system may have accuracy issues, particularly for people of certain ethnic backgrounds, which could be discriminatory. Employees have not necessarily consented to biometric data collection, and the company would need to comply with the DPA 2018 regarding how this sensitive data is stored and used. There is also the question of proportionality — is facial recognition necessary, or would a less invasive method (e.g. key cards) achieve the same goal? A strong answer would weigh up both sides and offer a reasoned conclusion.

Q12: Explain what the “right to be forgotten” means under the DPA 2018.

Answer: The “right to be forgotten” (formally called the “right to erasure”) means that individuals can request that an organisation deletes their personal data. This right applies when the data is no longer necessary for the purpose it was collected, the individual withdraws their consent, or the data was processed unlawfully. For example, a person could ask a search engine to remove links to outdated or irrelevant personal information about them. However, this right is not absolute — organisations can refuse if the data is needed for legal obligations, public health, archiving in the public interest, or the exercise of free expression. The principle was established by a landmark EU court ruling in 2014 and is now part of UK GDPR.

Q13: A school is choosing between open source and proprietary software for its computers. Discuss the factors they should consider.

Answer: Cost: Open source software (e.g. Linux, LibreOffice) is usually free, which could save the school significant money on licence fees. Proprietary software (e.g. Windows, Microsoft Office) requires paid licences but may come with education discounts. Support: Proprietary software typically comes with official customer support, which may be important for a school without dedicated IT staff. Open source relies on community support. Compatibility: Students may use proprietary software at home, so learning different software at school could cause confusion. However, learning open source alternatives teaches transferable skills. Customisation: Open source can be tailored to the school’s exact needs. Security: Both have strengths — open source benefits from community review, while proprietary software may receive regular professional security updates. Training: Staff may need training on unfamiliar open source tools. A good answer would consider at least three of these factors and offer a balanced conclusion.

Q14: Describe two ways individuals can reduce the environmental impact of their technology use.

Answer: (1) Keep devices for longer — Instead of upgrading smartphones or laptops every year or two, individuals can extend the life of their devices by maintaining them, replacing batteries, and using protective cases. This reduces e-waste and the environmental cost of manufacturing new devices. (2) Recycle e-waste responsibly — Rather than throwing old devices in the bin, individuals should take them to designated e-waste recycling centres or return them to the manufacturer. This ensures that toxic materials are safely processed and valuable materials (gold, copper, rare earth elements) are recovered. Other valid answers include: reducing streaming quality, using energy-efficient settings, buying refurbished devices, or supporting companies with strong environmental policies.

Q15: A social media company collects data about users aged 13–17. Describe three responsibilities they have under the DPA 2018.

Answer: (1) Lawful and transparent processing — The company must have a lawful basis for collecting young people’s data and must clearly explain what data is being collected and why, using language that young people can understand. (2) Data minimisation — The company should only collect data that is adequate, relevant, and necessary for the stated purpose. They should not collect excessive information about young users. (3) Security — The company must keep the data secure using appropriate technical and organisational measures, protecting it from unauthorised access, accidental loss, or data breaches. Additional valid points: the company must obtain verifiable parental consent for children under 13 (a UK GDPR requirement; the ICO’s Age Appropriate Design Code sets further design standards for children’s services), data must not be kept longer than necessary, and young people have the right to have their data deleted on request.

Exam Tips for Issues & Impact Questions

This topic appears in every GCSE Computer Science paper, and the questions often carry high marks. Here is how to maximise your marks:

  1. Always discuss both sides. Whether the question asks about surveillance, AI, environmental impact, or any other issue, the examiner wants to see a balanced argument. State the benefits, state the drawbacks, and then give your conclusion. Even if you feel strongly about one side, you must acknowledge the other perspective to earn full marks.
  2. Name specific laws. Do not just say “this is against the law.” State which law — the Data Protection Act 2018, the Computer Misuse Act 1990, or the Copyright, Designs and Patents Act 1988. This shows precise knowledge and earns more marks. If relevant, mention who enforces the law (e.g. the ICO enforces data protection).
  3. Use real-world examples. Mentioning cases like the Cambridge Analytica scandal, the British Airways data breach, or the TalkTalk hack demonstrates that you understand how these issues apply in practice. Even a brief reference to a real case makes your answer stand out.
  4. Identify stakeholders. When discussing the impact of a technology or policy, explain how different groups are affected. Consider users, businesses, governments, vulnerable groups, and the wider community. This shows the examiner that you can think beyond the obvious.
  5. Conclude with your own reasoned opinion. After presenting both sides, state what you believe and explain why. The examiner is looking for critical thinking, not just memorised facts. Phrases like “On balance, I believe…” or “Considering the evidence, the strongest argument is…” signal a mature, evaluative response.

Common command words in this topic:

Remember: In discussion questions, marks are awarded for the quality of your reasoning, not just the number of points you make. Three well-developed points with examples will score higher than six brief bullet points. Take time to explain why something matters, not just what it is.

Video Resources

These Craig 'n' Dave videos cover the ethical, legal, environmental, and emerging technology topics you need to know.

Past Paper Questions

Practise these exam-style questions, then check your answers against the mark scheme that follows each one.

Explain what is meant by the 'digital divide' and give two examples. (4 marks)

Mark scheme:

  • The digital divide is the gap between those who have access to technology and those who don't (1 mark)
  • Example 1: Economic — some people cannot afford computers/internet (1 mark)
  • Example 2: Geographic — rural areas may have poor broadband (1 mark)
  • Example 3: Generational — older people may struggle with technology (1 mark)

Describe two offences under the Computer Misuse Act 1990. (4 marks)

Mark scheme:

  • Unauthorised access to a computer system (1 mark) — e.g. hacking into someone's account (1 mark)
  • Unauthorised modification of data (1 mark) — e.g. spreading a virus or deleting files (1 mark)
  • Unauthorised access with intent to commit further offences (1 mark) — e.g. accessing bank systems to steal money (1 mark)

Explain two ways that computers have a negative impact on the environment. (4 marks)

Mark scheme:

  • Energy consumption: Data centres and devices use large amounts of electricity (1 mark), often from non-renewable sources contributing to climate change (1 mark)
  • E-waste: Discarded devices contain toxic materials (1 mark) that pollute the environment when not disposed of properly (1 mark)

Explain what is meant by 'algorithmic bias' and why it is a concern. (4 marks)

Mark scheme:

  • Algorithmic bias is when AI systems produce unfair/prejudiced results (1 mark)
  • This happens because AI learns from existing data (1 mark)
  • If the training data contains human biases, the AI replicates them (1 mark)
  • This can lead to discrimination in areas like recruitment, lending, or criminal justice (1 mark)

Your Responsibilities as a Digital Citizen

You are part of the first generation to grow up entirely in the digital age. That comes with both incredible opportunities and real responsibilities. Being a responsible digital citizen is not just about following rules — it is about making thoughtful, informed choices about how you use technology and how you treat others online. Here is what being a responsible digital citizen looks like:

The technology industry needs people who can code, but it also desperately needs people who can think critically about the impact of what they create. The fact that you are studying both the technical and ethical sides of computing puts you in a strong position to shape technology for the better.

Remember: the code you write and the systems you build will be used by real people. Always ask yourself: Who benefits from this technology? Who might be harmed? Is this fair? How can I make it better?

As you continue your studies and eventually enter the workforce, you will face real decisions about these issues. Whether you are building a website, designing an app, training an AI model, or choosing which technologies to use, the knowledge you have gained in this topic will help you make responsible, informed choices. The best computer scientists are not just technically skilled — they are also ethical, thoughtful, and aware of the wider impact of their work.

Further Reading