CensorNet Web Filtering Policy and Approach for Education

Updated 2 months ago by admin

With over ten years’ experience in the education sector, the CensorNet platform includes numerous predefined template policies, Rules and keyword dictionaries that simplify implementation and help ensure rapid compliance with safeguarding guidelines.

Intelligent Web Filtering

Web Security provides filtering of over 500 categories of web content covering billions of web pages. The solution includes both the Counter Terrorism Internet Referral Unit (CTIRU) (Prevent) and Internet Watch Foundation (IWF) illegal sexual content lists.

Critically, Web Security provides page-level categorisation rather than categorising at the domain or sub-domain level. Highly regarded, legitimate sites such as www.bbc.co.uk or www.theguardian.com may still contain individual pages (stories) that would be considered inappropriate for children.
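
The practical difference between domain-level and page-level decisions can be shown with a minimal sketch. The category data, helper name and example URLs below are illustrative only, not CensorNet's actual data or API:

```python
from urllib.parse import urlparse

# Illustrative page-level category data: full (host, path) pairs map to
# categories, so one legitimate site can contain both allowed and blocked pages.
PAGE_CATEGORIES = {
    ("www.example-news.co.uk", "/sport/results"): "News",
    ("www.example-news.co.uk", "/crime/graphic-report"): "Violence",
}
BLOCKED_CATEGORIES = {"Violence"}

def is_blocked(url: str) -> bool:
    """Return True if this specific page (not the whole site) is blocked."""
    parsed = urlparse(url)
    category = PAGE_CATEGORIES.get((parsed.hostname, parsed.path), "Uncategorised")
    return category in BLOCKED_CATEGORIES
```

With domain-level filtering both example URLs would receive the same verdict; page-level categorisation lets the sports page through while blocking the inappropriate story on the same site.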

Advanced filtering enables educational establishments to comply with the UK Home Office Prevent strategy and the Counter-Terrorism and Security Act 2015, and to meet their duty to demonstrate “due regard to the need to prevent people from being drawn into terrorism”. Key features include:

  • Different policies that can be quickly and easily created for different ages or year groups
  • Integration with Microsoft® Active Directory for simplified user and group policies
  • Simple workflow for managing access to websites and responding to unblock requests (which can be actioned by teachers, faculty or year heads, not just IT staff)
  • Scheduled and Time Quota access to site categories, allowing for more relaxed lunchtime and after-school policies
  • Intelligent appropriate filtering, ensuring protection of pupils from harmful or inappropriate material without being overly restrictive and impacting learning
  • Specific categories covering anonymous browsing and proxy bypass sites
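
The scheduled-access feature in the list above amounts to a time-based policy check. A minimal sketch, with example category names and windows that are assumptions rather than CensorNet's actual rule syntax:

```python
from datetime import datetime, time

# Illustrative schedule: the "Games" category is only reachable during the
# lunch break and after school (times and category name are examples only).
RELAXED_WINDOWS = [(time(12, 0), time(13, 0)), (time(15, 30), time(17, 30))]

def games_allowed(now: datetime) -> bool:
    """Apply a schedule-based rule: allow only inside a relaxed window."""
    return any(start <= now.time() <= end for start, end in RELAXED_WINDOWS)
```

A time-quota rule would extend this by also tracking cumulative minutes of use per pupil per day.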

Web Security also includes the ability to analyse web pages in real time for keywords and phrases associated with discrimination, bullying, self-harm, violence, grooming, radicalisation and extremism, with the convenience of pre-populated dictionaries. All dictionaries can be extended or customised if required.
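At its simplest, dictionary-based analysis means matching page text against named term lists. The dictionaries and function below are a tiny illustrative sketch; real pre-populated dictionaries would be far larger and typically weighted:

```python
# Illustrative keyword dictionaries, modelled on the pre-populated lists
# described above (contents and names are examples only).
DICTIONARIES = {
    "bullying": {"loser", "nobody likes you"},
    "self-harm": {"cutting myself"},
}

def scan_page(text: str) -> set:
    """Return the names of every dictionary whose terms appear in the text."""
    lowered = text.lower()
    return {name for name, terms in DICTIONARIES.items()
            if any(term in lowered for term in terms)}
```

Because the dictionaries are plain data, extending or customising them, as the platform allows, is just a matter of adding entries.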

Advertisements within web pages can also be removed to protect young people from distracting or inappropriate brand messages, as well as malvertising which increasingly affects mainstream sites that aren’t typically blocked.

Safe Search and YouTube for Schools

Safe search can be enforced on popular internet search engines such as Google, Yahoo! and Bing. Specific keywords or phrases can be blocked from being used in search strings. All searches are logged, providing an audit trail of which pupils searched for which terms or topics on the internet.
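One common way a gateway enforces safe search is by rewriting the search URL in flight. Google honours the documented `safe=active` query parameter; other engines use their own mechanisms, so this sketch handles Google only and is an assumption about approach, not CensorNet's actual implementation:

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def enforce_safe_search(url: str) -> str:
    """Rewrite Google search URLs to force safe search; pass others through."""
    parsed = urlparse(url)
    if parsed.hostname and parsed.hostname.endswith("google.com"):
        params = parse_qs(parsed.query)
        params["safe"] = ["active"]  # Google's documented safe-search flag
        parsed = parsed._replace(query=urlencode(params, doseq=True))
    return urlunparse(parsed)
```

Rewriting at the gateway means the setting cannot be switched off from the pupil's browser.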

The CensorNet solution also supports YouTube for Schools, turning the video-sharing service into a powerful, yet safe, educational resource.

Protection Inside and Outside the Classroom

CensorNet USS uses a combination of gateways and agents to protect students on and off the network, regardless of the device used. Furthermore, the Cloud Gateway features a captive or guest portal to ensure rules and policies are applied when students use their own personal devices to access web-based resources inside and outside the classroom, supporting BYOD initiatives.

Agents for Windows and Mac OS X protect devices regardless of where they are, even if users travel internationally.

Cloud Application Security – Going Beyond Block and Allow

Few websites today are entirely static. Most support a level of user interaction and are therefore applications, even if the site is a news site that simply allows users to comment on articles and stories.

Increasingly, organisations need to implement granular policies that manage user actions within web applications. Simply blocking or allowing sites is no longer a viable solution to balance student protection with the need to learn.

Cloud Application Security provides visibility of all user activity within web or cloud applications, driven by an App Catalog containing hundreds of applications and thousands of user actions. If a web application is allowed, specific features within the application can be monitored or blocked. Using simple Rules, sites can be made read-only.
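One illustrative way a gateway can make an allowed application read-only is to permit HTTP methods that only fetch content while blocking those that change state. The rule shape below is a sketch under that assumption, not CensorNet's actual rule syntax:

```python
# Methods that only retrieve content; state-changing methods (POST, PUT,
# DELETE, PATCH) are blocked when a read-only rule applies.
READ_ONLY_METHODS = {"GET", "HEAD", "OPTIONS"}

def apply_read_only_rule(method: str, app_allowed: bool) -> str:
    """Return the verdict for one request under a read-only rule."""
    if not app_allowed:
        return "block"
    return "allow" if method.upper() in READ_ONLY_METHODS else "block"
```

Under such a rule, pupils could still read a discussion thread but could not post to it.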

Keywords can be used to ensure that the content of messages, posts and tweets is appropriate and not derogatory to the organisation or its staff. Files uploaded to cloud storage applications such as Dropbox or Microsoft® OneDrive can be scanned for content and malware.
