Personal information, de-identification and sensitive information
Should there be a criminal offence for re-identifying de-identified information? What exceptions should apply?
Enter your response here
Yes
There should NOT be an exception based on organisation type; otherwise, the organisation types most likely to breach will lobby to be exempted. For example, law practices and religious organisations will seek to be exempt, and they should be properly accountable - for example, where private information of victims of sexual abuse is mishandled by law firms and by religious organisations.
At most, the standard exemptions only - such as routine exemptions for complying with a lawful order of a Court, enforcement of a law by a statutory authority (police) etc.
Should consent be required for the collection, use, disclosure and storage of other tracking data, such as health data, heart rate and sleeping schedule, in addition to precise geolocation tracking data?
Enter your response here
Yes to all.
Also, 'consent' needs to be genuine - ie not providing consent should not result in denial of a service. At present, 'consent' forms are used to coerce or force the person to provide 'consent'. This occurs not just in commercial situations but even with Australian government entities. For example, medical college forms such as those of Australian General Practice Training (AGPT) require the applicant to 'consent' to their data being used for advertising purposes or they cannot proceed with the application (to become a doctor / GP). This is probably not lawful consent, which is concerning practice from a medical college that should know about consent. As well, they make non-essential personal fields mandatory, such as demographic data (race, gender), which should not be mandatory as it should be irrelevant to whether or not a person is selected for a training position as a GP. But the applicant cannot progress through the application without providing the data.
Small business exemption
If you are a small business operator, what support from government would be helpful for you to understand and comply with new privacy obligations?
Please select all that apply
Unticked: Information sessions
Unticked: Written guidance
Unticked: Digital modules
Unticked: Self-assessment tools
Unticked: Financial rebates or tax concessions for obtaining independent privacy advice
Ticked: Other
Please expand on your response
No small business should be exempt from the Privacy Act 1988.
Considerations of size and resources can be properly taken into account at the investigation stage of a breach by the regulator (eg OAIC) and in determining the appropriate remedy (education, cessation of the breach, changes to practices, financial penalty, compensation to victim/s, etc).
Privacy needs to be serious to be taken seriously and should be based on the nature of the information, not the nature of the entity handling the information.
The culture of 'data is gold' needs to be replaced with 'data is uranium' - potentially powerful and potentially toxic. Businesses and people working in businesses handling client data need to have a mindset and awareness that they are custodians of someone else's sensitive information, and act with respect. This is achieved by making all entities accountable with meaningful penalties.
For example, all businesses handling sensitive data - health, financial, legal, personal - should be subject to the same conditions as 'businesses over $3 million'. There are vast numbers of small businesses handling sensitive data for ordinary Australian mums and dads, including law firms, accountants, real estate agents, etc. They are not currently subject to any obligations to protect their clients' information, and they should be.
Please do not bow to pressure from Small Business Councils or lobby groups for small businesses. Small businesses already accept that what makes the Australian economy and community strong is appropriate regulation across many areas of small business. This is no different, and small businesses can cope with this regulation just as they cope with other regulation. Ultimately, it is well recognised that strong regulatory frameworks create consumer confidence in a business, which is GOOD for business. Alternatively, low levels of consumer confidence will cause hesitation and decreased uptake of small business services. So strong regulation of the privacy obligations of small businesses is good for those businesses, whether they initially recognise that or not.
Employee records exemption
How should employers provide enhanced transparency to employees about the purposes for which their personal and sensitive information is collected, used and disclosed?
Response
Employers should be required to disclose - up front - to employees:
- what information is obtained
- why
- where it is stored
- how long stored
- how it might be used and how it cannot be used
- who in the organisation has access to the data
- who the data is shared with outside the organisation
- provide to the employee (on demand from the employee, but also automatically each year from the employer) a complete copy of the data held. This should be built into the process as a proactive obligation upon the employer, so that the asymmetrical power imbalance between employer and employee does not intimidate employees out of exercising their data rights
- right to amend any inaccurate data
- right to comment on any data, with an obligation on the employer that the comment must be stored with the data
- right to destroy data - eg if false, or on cessation of employment, or after a certain time period
There should be bans on certain information being allowed to be requested, obtained or sought. This needs further discussion with stakeholders, but the aim is to avoid the collection of data that is inappropriate to the employment and may result in discrimination, for example: gender, race, religion and sexuality data; previous WorkCover claims (where not required to manage an injury); other health data not relevant to the employment (eg abortions, surgeries, medical conditions).
There should be penalties upon employers for failure to comply, and fines/compensation for victims of data breaches by employers.
Noting the current individual rights contained in Australian Privacy Principles 12 and 13, and the proposed individual rights in proposals 18.1, 18.2 and 18.3, what specific exceptions (if any) should apply to these rights in the employment context?
Enter your response here
The foundation principle should be recognition of the asymmetry of power inherent in the relationship between employer and employee, so legislation should aim to empower the employee with regard to:
- protection from inappropriate intrusion by employer (ie data not required for the job)
- inclusion in data collection and storage, ie being informed proactively
- access to data held about them
- right to remedy data errors
- storage safeguards and time frames
Employers' "rights" should be narrowly defined through the prism of only what is absolutely necessary for the employment - akin to the precedent in discrimination law, where discrimination is lawful only where it is essential to the work role. The same approach should be taken with data. The employer's right to request, collect, obtain from other sources, store, share, etc should be restricted to only that data which is "essential" (ie a higher threshold than simply "useful") to the employee's work function, the employer fulfilling statutory obligations, etc. The onus and expense should be on the employer to prove that their conduct has complied and that their data activities are "essential".
If privacy protections for employees were introduced into workplace relations laws, what role should the privacy regulator have in relation to privacy complaints, enforcement of privacy obligations and development of privacy codes in the employment context?
Enter your response here
The privacy regulator (eg OAIC) should have the same role as presently, ie to investigate and make determinations on the question "has a privacy breach occurred?" - ie a finding of fact.
They could also make suggestions and orders regarding the penalty / remedy of the privacy breach.
If the privacy breach was also itself a breach of a workplace relations duty of the employer, investigation and enforcement of the breach of workplace relations laws/duties would be the role of the workplace relations regulator.
Legislation might also make it clear that there is no 'double jeopardy' defence for the employer - for example, that a penalty imposed by the OAIC does not prevent the workplace relations regulator also imposing a penalty for a breach which at its core is the same conduct as already penalised.
Journalism exemption
What additional support, if any, would be needed to assist smaller media organisations to comply with privacy obligations?
Enter your response here
Routine:
- education
- a time period to adjust to implementation (eg 6 months)
Media organisations, including small ones, already cope with other regulation and legal frameworks such as defamation and laws prohibiting identification of victims of sexual crimes. They can cope with these laws easily enough if given the information up front and enough time to adapt, eg 6 months.
The release of the Report and the review and consultation period would do most of the necessary education and preparation for them.
Additional protections
What additional requirements should apply to mitigate privacy risks relating to the development and use of facial recognition technology and other biometric information?
Enter your response here
All collection of identified biometric information should require direct consent.
There should be alternative access to a service if a user declines consent to biometric information.
Different contexts to consider:
- Government (eg passports, passing through a national border, Australian citizens or not citizens?)
- Employment - employer requirements for capturing employee data or using it for access to work premises or operating work equipment
- Private / Commercial - a customer accessing a shop, using an online app or service, using a product
For example, an employer requesting face scan or fingerprints to access a work premises should allow an employee to decline and use swipe cards instead.
Collection should be only for the specified purpose. Specified purposes should be narrow, not broad. For example, will a stadium operator be allowed to demand that all entrants be face scanned / matched against a driver's licence to identify the face? Can the customer say no and still be allowed entry? Can the stadium operator then:
- give that data to law enforcement
- give the data to government agency for non-law enforcement purposes (eg advertising, demographic data collection, etc)
- sell the data to a data harvesting company
- use the data in advertising
- use the data in direct marketing
Can a Grocery Store (Coles, Woolworths) do the above?
Can a school (government school or private religious school) do the above, with children's data?
What about a bar or pub or gay nightclub - can they do all of the above?
What about a peaceful protest rally?
What about people participating in online work conferences / video conferences? Or online training courses and educational activities?
The emphasis in approaching all these scenarios should be on creating a system which:
- values the rights of the individual
- upholds the dignity and humanity of the individual
- assumes that people are law abiding until proven otherwise
- notes that intrusion by law enforcement should be triggered by the actions of a person (known in some jurisdictions as 'probable cause'), ie police should gather data in specific response to the antisocial actions of a person (such as rowdy behaviour in a stadium), not data harvest entire crowds of peaceful individuals acting benignly and store the data forever
- notes that it is dehumanising to treat people as 'data commodities' and to store infinite databases about every action, movement etc of people
- recognises that people, particularly children and young people, have a right to develop and change over time, including maturing in their social, political, religious and philosophical beliefs, and that data harvesting and permanent storage can deny this right to mature and develop by capturing and permanently recording beliefs or behaviours that may reflect the young person's immaturity but do not reflect the totality of the young person's potential as a human if allowed to grow and mature
- notes that it is in the best interest of humanity to allow humans the capacity to grow and change over time
- recognises that technology can pose risks to humanity that are greater than the risks of not having the technology; the conveniences and advancements of a certain application of technology are not necessarily exclusively benign, but may come at the cost of other elements of a cohesive and free society that we may value more than the convenience
Research
Should the scope of research permitted without consent be broadened? If so, what should the scope be?
Enter your response here
No.
The temptation of research and advancements, eg medical research using patient data en masse, should not outweigh the respect that all medical professionals and academics should have for the patients (humans) at the centre of their tasks.
Research without consent has led to many documented atrocities over the past 150 years: bizarre physical treatments of psychiatric patients in asylums, pharmaceutical experiments on Defence personnel, radiation exposure experiments, the horrific medical experiments conducted in Germany in the 1930s and 1940s on vulnerable or marginalised populations, and surgical experiments in America in the 1950s and 1960s (lobotomies, etc).
All of these experiments / research produced data. In all cases, the 'greater good' that would result from the use of the data was the justification for overriding the rights of the individuals.
What starts as a tempting and well-intentioned idea - let's use the data from the millions of daily patient interactions to inform population-wide research - will mission creep into the use of identified or identifiable data without consent for commercial or profit-making purposes by drug companies, product companies, etc. With profit driving the research, it will not necessarily focus on good science but on things that can be profitable - 'treatments' are more profitable than 'cures', for example.
We have seen what happened with the NHS in the UK, where the head of digital records was motivated to 'commercialise' the data, and similar attempts by a previous government with 'My Health Record' in Australia being 'opt out' rather than 'opt in'. Despite the marketing, the focus was not improved patient care; the focus was on commercialising the data for profit. In doing this, a honey pot for hackers and fraudsters is created.
At all times, patient consent should be required. Consent should be specific to the research -- not a 'blanket' consent which the patient (in distress and pain attending a hospital) is required to sign that their data can be used 'for any purpose'.
Consent should NOT be a product of 'clever lawyers' finding ways to exploit people for their data; consent should respect the autonomy of the individual, no matter how frustrating for the researcher.
As a society we should focus on encouraging people to be more socially charitable and see research as a duty (as Scandinavian countries do, with high research consent rates), and consenting to data sharing should be seen as a civic duty, like blood donation or opt-in organ donation -- but ultimately it should be the patient's choice.
Humans should always own the right to their own DNA.
The giving of a sample (or the taking of a sample by an entity) should not transfer rights and ownership of DNA to any other party.
If medical research is to be conducted with DNA it should be with express consent and with acknowledgement that the individual retains ownership and intellectual property over their DNA.
A person should be able to place limits on consent - for example, they may consent to their DNA and tissues being used to test drug treatments, but not consent to them being used to develop a clone or cloned embryo for testing and destruction.
A person's choice should not be judged, as it is their choice to make. It is okay for one person to consent to any and every use just as it is okay for the next person to give limited consent, and for the next person to not consent. That is a balanced and respectful society.
I personally would be inclined to give a broad consent - but that is my choice and should not obligate another person to make the same choice, nor to have the choice made for us by a government authority or a private company.
Should there be a single exception for research without consent for both agencies and organisations? If not, what should be the difference in scope for agencies and organisations?
Enter your response here
There should be no exemption for research without consent for either agencies or organisations.
But if the government is determined to facilitate widespread use of people's data without consent, which the question implies, then the exemptions should only be for agencies (government / statutory authorities), which hopefully are held to higher ethical standards than corporate entities and are accountable to the people through FOI, the replacement of Ministers at elections, etc.
Corporate organisations are driven only by profit and will exploit any loophole and should be tightly regulated.
Do we want to create a society where people serve the organisations, or where the organisations serve the people?
Which entity is the most appropriate body to develop guidelines to facilitate research without consent?
Enter your response here
There should be no research without consent.
ANAO should have oversight, as any existing agency has a conflict of interest: they will be the entity doing the non-consensual research and benefiting from it.
The entity should be transparent to the public and Senate oversight committees.
The consultation period should be lengthy and travel to all cities and regional areas to allow the people whose data you are planning to exploit to have their say about that.
People experiencing vulnerability
What privacy-related issues do APP entities face when seeking to safeguard individuals at risk of financial abuse?
Enter your response here
balancing privacy of the individual with concern for their welfare
balancing statutory privacy obligations with statutory duty of care obligations
May help to include legislated guidance as to how an APP entity is expected to balance competing obligations, eg:
- is one paramount to the other?
- they are expected to document the competing obligations to the client and seek instructions?
- they are expected to document the competing obligations to the regulator and seek guidance?
- do they have clear reporting obligations for at risk concerns (eg mandatory reporting) which already includes exemptions from privacy obligations, being a statutory obligation?
How can financial institutions act in the interests of customers who may be experiencing financial abuse or may no longer have capacity to consent?
Enter your response here
Accountability and liaison
Do not assume sole responsibility - identify a responsible adult such as:
next of kin
guardian
regulator
so there is a competent adult who has a legal duty to act in the interests of the vulnerable individual. The financial institution has too much of a conflict of interest.
Perhaps where there is doubt or suspicion of incapacity - approach the regulator (OAIC) for guidance
Confirmation of capacity would be an obvious step, eg assessment by a medical practitioner and formal documentation of the existence or lack of consent.
Should the permitted general situations in the Privacy Act be amended to enable disclosure of personal information in safeguarding situations which may not meet the requirements under section 16A, item 1? What other options for reform could be considered to protect people where abuse is suspected while respecting an individual's privacy and personal autonomy?
Enter your response here
assumption of autonomy of the individual until proven otherwise
provision of information (support services, where to seek help and report) to the suspected at risk person to allow them to access services
remembering that patronising a person 'for their own good' should not outweigh the person's individual agency even if they are in a difficult personal situation.
help should focus on empowering and educating the person to remove themselves from the at-risk situation, and on seeking their consent to do so
taking steps without their consent risks being just more financial abuse (after all, the very at-risk situation being suspected is one in which a party is acting without the person's consent....)
Individual rights
What would the impact of the proposed individual rights be on individuals, businesses and government?
Enter your response here
Is portability and the CDR really an individual right, or is it a corporate right (of banks to share data without customer consent) dressed up as an individual right? Does it require the customer to consent and activate the right? Does the customer have a right to say no? If the entity activating the right is a bank and not the customer, then it is not an 'individual right'; it is the bank's right.
Individuals should have established rights to access and explanation. Safeguards should ensure explanations which are meaningful and not superficial or tokenistic. Ultimately dispute to be decided by the regulator (OAIC).
Caution regarding allowing an entity to charge a fee. This is a barrier to ordinary mums and dads who need help addressing the asymmetry of power with corporations.
For example, a first request should be free. Or one request in any 12 month period should be free.
If fees are to be chargeable, the legislation should make them a nominal amount, ie even if it will not cover the organisation's costs. Organisations will use cumbersome, expensive processes, including lawyers, to drive their fees up beyond the affordability of ordinary people. This will result in data breaches remaining undetected and unaddressed, which is not in the community interest.
There should be "no fee" groups such as people on a Disability Support Pension, or any form of pension.
Also certain entities and certain types of information should not be allowed to charge a fee; for example entities over a certain size.
Protection from data collection and data storage over-reach even when done for seemingly 'good' reasons:
The Australian Federal Police submission is noted, in which they state "it is difficult for APP entities to know what might be required for law enforcement ahead of time".
While this is true, it has always been true; it is not some new truth of the digital age.
The remedy is not to collect every piece of data on every Australian and store it until their death "in case" it may become useful. No, the solution is that investigations be conducted in a timely and efficient manner so that necessary data can be sourced contemporaneously with the investigation and prior to routine data destruction or loss.
This is how investigations were conducted before the digital age.
While sympathetic to the burdens of investigation and evidence collection, the greedy, purposeless and undirected harvesting of data (which should be called 'junk data' or 'white noise') does need to be balanced and moderated. Otherwise, taking the AFP logic to its logical extension, ALL data of every kind about every person would be stored and sent to a police database just on the off chance that it may at some point in 20 years be useful to know which granny purchased a litre of milk at Woolworths at 2pm on a Thursday.
The reality is policing has managed without such extraordinary reliance on the digital capture of the minutiae of ordinary Australians' lives, and it can continue to do so.
The privacy laws should respect this and uphold this as our valued way of life.
Do Australians want to be digital prisoners under constant surveillance? Or do we want a measure of freedom, accepting the risk that accompanies that freedom?
The AFP already accept that not all data can be or should be collected about everyone all the time, so the question then is where to strike the right balance.
Right now, data can be obtained using warrants. Data collection as evidence collection should be targeted and investigation driven, ie seeking specific data for a specific purpose, not fishing expeditions across the entire, law-abiding population.
Are further exceptions required for any of the proposed individual rights?
Enter your response here
the exceptions at 18.6 appear reasonable and to strike the right balance
the focus should be on ensuring ordinary law-abiding citizens enjoy the right to delete or correct erroneous, defamatory or harmful data in the age of 'permanent online digital records'
balancing this with community protection, so that convicted offenders seeking to conceal themselves cannot access those 'rights to be forgotten' where it would be contrary to the public interest
Automated decision-making
What types of decisions are likely to have a legal or similarly significant effect on an individual's rights?
Enter your response here
Any automated decision that results in approving or denying access to a service or entitlement, eg a pension. Eg Robodebt....
Any automated decisions about a person's financial obligations to a corporation or the government - eg the ATO using automated decision making with tax payer accounts
There has been talk of using automated decision making in Courts for sentencing - think Robodebt locking people up in jail .... what could possibly go wrong?
Law firms already use automated decision making to enter client details and it produces case documents, letters, and recommended outcomes (such as dispute settlement outcomes) with barely a human involved (notably this has not been accompanied by a decrease in the fees the law firm bills to the client ....)
As digital photo recognition tech becomes more prevalent, including for vehicles and people, the use of automated decisions will result in innocent people being labelled as responsible for criminal acts they did not commit, due to errors in the systems. This is already happening with vehicle registration plate spoofing, has happened with IP address spoofing or ghosting, and can and will happen with simple facial recognition match errors.
Medical decision making is increasingly going to be augmented, including diagnosis and prescription of pharmaceuticals, potentially even selection for surgery.
This may result in people being denied surgery they need, or being subjected to surgery they do not need, incorrect prescribing, etc.
Pricing is being augmented with automated decision making, including insurance premiums and even ordinary items, for example variable pricing being offered based on location (or, more accurately, location of IP address), gender, past buying habits, race, etc.
Automated decision making is being used in delivering information and news to consumers - with potential damaging impact of creating 'echo chambers' that warp people's perceptions of the facts around them, reducing diversification of information sources. This is deleterious to healthy cohesive society.
Steps to limit risk of adverse consequences from automated decision making include:
- All automated decision making should be subject to human oversight and intervention.
- All persons should be advised up front when automated decision making is used in their matter or service. Potentially the option to opt-out and request human management would be a good idea up front.
- Any time a person raises a question, concern or challenge, this must be an automatic and instant trigger to step away from the automated decision making in that person's matter and replace it with human management. A bit like various laws around the country (eg 'Mason's Law' in Queensland) which say that any time a patient or parent of a child patient has a concern about the health care, they can trigger Mason's Law, which means a senior hospital administrator MUST step in and oversee the care. It may be a mild inconvenience and cost for the hospital but it saves lives. Had the Dept of Human Services adopted this approach, the Robodebt scandal may have been averted or minimised.
Should there be exceptions to a right for individuals to request meaningful information about how substantially automated decisions with legal or similarly significant effect are made?
Please select one item
No
Please provide examples of what these exceptions should be
If the information affects them, they should be able to access it.
Ultimately this is in the best interest of the system, not just the individual, even if systems respond by not wanting to release the information. The reason it is in the best interest of systems is that by releasing the requested information to the individual (who evidently has a grievance, which is why they are requesting it), the information, assumptions, methodology etc will then be reviewed by the individual, who will provide feedback on where it is deficient - this allows the system to incorporate that feedback and make improvements.
That is a healthy system that improves. The alternative is secrecy, in which false assumptions and algorithms will make decisions adverse to people's lives and never be able to be scrutinised, tested and improved.
The concept should be "nothing about me, without me" - in other words if it is okay for a digital system to apply an algorithm to a human then it should be okay for the human to examine the algorithm.
Even in cases where a corporation is concerned about 'intellectual property' or proprietary rights of their digital system or algorithm, the right of the individual whose life is being shaped and impacted by that system should be paramount.
It may be that certain provisions such as non-disclosure can be applied to strike balance between the individual's right to examine a system that is making decisions in their life versus the proprietary interest of a corporation.
Even with law enforcement - already, and before the digital age, a person charged with a crime could legally access law enforcement documents such as a search warrant application, surveillance reports and log sheets, and other data, some of which may have revealed law enforcement methodology, much to the frustration of law enforcement.
But in terms of automated systems making decisions about people, used by corporations and by government (the ATO, issuing traffic infringements, interventions based on digital AI or surveillance, etc): to ensure the integrity of the systems being used and to prevent accidental miscarriages of justice, these systems must always be available to be examined by the person who is subject to the system.
This is consistent with the legal principle that a person has a right to face their accuser - in this case the 'accuser' is a digital AI platform.
The same applies to corporate systems - if one customer is charged twice the price of the customer next to them, the customer should have the right to question that and examine the digital pricing algorithm. It would not be accepted if a shop charged a customer twice the price based on race, gender, sexuality, postcode, etc, so why should we tolerate a digital automated system doing that to us?
Direct marketing, targeting and trading
What would be the impact of the proposals in relation to direct marketing on individuals, businesses and government?
Enter your response here
The legislation should protect the right of individuals to be free from direct marketing as superior to the 'right' of a corporation to target them with advertising.
Also, some harms can be caused by the digital approaches to direct marketing; for example, a young adolescent girl who searches online for information about sexual health, pregnancy, STIs, etc, can then be bombarded with associated advertising which may expose her to harm from family members.
It is cruel that a woman who has searched for pregnancy test kits, then suffered a miscarriage, should be bombarded with advertising for bibs, cots and prams, and spammed with photos of babies.
Should a vulnerable at-risk woman who searches online for 'abortion' have her data history purchased by a political party or a religious activist entity and then targeted with right-wing conservative literature or religious 'advertising'?
Businesses may be required to survive with the less direct marketing that they used for 100 years, but that is a small inconvenience for preserving the dignity of humans. Also, the corporations seem to have made plenty of money these past 50 years since television with their 'indirect' marketing; McDonald's, Nike, etc all seem to be doing okay.
What would be the impact of the proposals in relation to targeting on individuals, businesses and government?
Enter your response here
see answer to direct marketing.
What would be the impact of the proposals in relation to sale of personal information on individuals, businesses and government?
Enter your response here
Hopefully sale of personal information ceases or significantly reduces, and if it does, this is a GOOD thing for individuals and society.
Sale of data should be outlawed other than by specific consent (ie each sale requiring its own specific consent) of the subject of the data. In the case of a consented sale, the subject of the data should be paid a percentage.
Companies will survive financially. Companies have made profits for a hundred years without buying and selling private information of customers.
Are there any technical or other challenges you would face in providing information about how your algorithms target users to provide them with online content or recommendations?
Enter your response here
No. If an algorithm can be built, it can be communicated to a user, client or subject, and it can be shared.
The only resistance is a cultural / ideological resistance from corporations who want to profit above all considerations including at the expense of customers and individuals. They can and will get over that and will comply with regulatory requirements, and keep making profits, which is as it has always been.
Please share any examples of situations where greater transparency about how individuals are being targeted by recommender algorithms is not necessary or important to individual or societal wellbeing.
Enter your response here
This should be a blank field, as there are no such examples.
The transparency is important to individual and societal wellbeing.
Security and destruction
What baseline privacy outcomes should be included in APP 11?
Enter your response here
The danger of listing a baseline is that it becomes overly prescriptive, leaving no scope for additional measures that should be considered 'reasonable', including as new technologies and risks develop. APP entities will also 'work to rule' and do only the bare minimum, or will legalistically argue that their measures comply with the baseline even where the outcome is ineffective, using it as a defence.
Rather, the approach of 'reasonable', with the question of what is reasonable determined on the merits (by investigation and ruling by the regulator, the OAIC), is the current approach and has benefits.
If a prescriptive baseline is to be used, it should be a VERY HIGH bar, not a low bar, and it should also incorporate a general, broad catch-all such as
"or such other step as would be reasonable in the circumstances, or as determined by the Privacy Commissioner"
to maintain the benefits of the current approach and allow flexibility to apply to future circumstances.
The submission to the review from the OAIC says it all:
"The OAIC noted that it did not consider that greater prescription would provide any additional certainty to APP entities." There is your answer.
What are the barriers APP entities face to minimise collection and retention of identity credential information (e.g. reference numbers from, or copies of, drivers’ licences and passports)?
Enter your response here
In addition to a genuine need to collect and store this data, let's be honest and acknowledge that further and unnecessary risk is created by the following mindset and conduct of APP entities:
- laziness - regarding privacy as a 'cost', not an investment or duty
- greed - hoovering up every little bit of data even when they don't know why
- disorganisation - systems are often ad hoc, out of date, patch upon patch
Removing those factors is the low hanging fruit to reduce data collection and storage to only that which is actually necessary.
Currently a barrier is statutory requirements - Government requires certain ID checks and data collection (proving identity, opening accounts, etc). Government compounds this with requirements to retain data for a number of years (eg medical data for 7 years, trust data under the Legal Profession Act and Regulations for 7 years, metadata retention, etc).
Insurance companies then add arbitrary retention periods, eg the Law Society schemes providing indemnity to lawyers require retention of client data for 7 years (the legislation only requires trust records, but legal insurers require the entire client file, which is excessive).
So Government plays a role by ceasing or reducing its statutory requirements - for example, require that ID be proven at commencement (opening an account or contract) but that the data MUST be destroyed after 30 days. If later there is some legal concern about the identity of the account holder, the account holder can simply be asked to re-produce their ID documents. It is not rocket science. The documents do not even need to be kept for 30 days; they simply need to be sighted by the employee opening the account, who ticks a box that the ID document has been sighted. If there is concern about fraud, that is what internal investigations and police investigations are for.
As it is, the concern about 'potential' and 'occasional' fraud, such as presenting false ID documents or collusion with the employee ticking the form, has resulted in a system of obsessive ID document collection and storage - and 10 million ACTUAL fraudulent thefts of those documents by criminals.
So a system created out of fear of small scale crime has resulted in large scale crime.
Also, professional regulatory bodies (legal, medical, etc) and insurers need to be guided by government to voluntarily reduce their archaic retention periods, which are obsessive and create dangerous risk. Measures that arose out of risk reduction - ie retaining files against the risk of potential future litigation, in the age of paper archive storage - now actually create a much larger and more probable risk of that data, stored digitally, being hacked or stolen.
The number of cases that actually result in litigation is minuscule (as a percentage of patient or client files stored), whereas the risk is that 100% of those files might be compromised in a data breach event, and the financial cost of that would be far greater than the combined financial cost of the tiny percentage of cases producing litigation for which an archived file may have been thought handy. Add to that the reality that merely having an archived file retained is not necessarily helpful in litigation - it may contain data prejudicial to the retaining party's position - and that legal protections against litigation exist, such as a stay of proceedings due to unavailability of documents.
So the outdated culture of storing records forever, out of some nebulous fear of potential future litigation, has to be reset - by industry, professional bodies, insurers, the regulator and government - so that all entities prefer a minimalist approach to data collection and retention.
Note also that a minimalist approach disincentivises bad actors from attempting to steal the data, makes it easier to contain a breach event, and makes it easier to track data breach perpetrators.
Controllers and processors
If small business non-APP entities that process information on behalf of APP entities are brought into the scope of the Act for their handling of personal information on behalf of the APP entity controller, what support should be provided to small businesses to assist them to comply with the obligations on processors?
Enter your response here
From government:
- education
- time to adapt (the time for the review should already be enough)
Given that the entity is seeking to profit by providing services to corporations, the expectation and onus is on it to be aware of the regulatory provisions and to comply.
Also, given that some large corporates will be creating these entities, the onus should be on them to comply and to ensure they set up these small business entities in compliance.
Also there should be a NON-DELEGABLE DUTY, ie the large corporation might delegate the functions to the small entity, but it should not be allowed to delegate its duty.
This is to prevent avoidance, where the big corporation divests its task to a smaller entity, takes no care or responsibility for how the smaller entity operates, then inevitably the smaller entity breaches persons' data but has no assets to meet its liability and folds overnight, only for a phoenix $2 shelf company to pop up the next day and take its place.
To avoid this, government should apply the lessons applied to religious organisations in relation to liability for child abuse and ensure that corporations' privacy obligations are NON-DELEGABLE duties.
Overseas data flows
Should the extraterritorial scope of the Act be amended to introduce an additional requirement to demonstrate an 'Australian link' that is focused on personal information being connected with Australia?
Enter your response here
No
All data of Australians should be covered. The Australian link is the Australian citizen to whom the privacy information relates.
The Privacy Enforcement Bill was correct to remove the 'Australian link' requirement. It is gone; it should stay gone.
Should disclosures of personal information to overseas recipients via the publication of personal information online be subject to an exception from the requirements of APP 8.1 where it is in the public interest? How should such an exception be framed to ensure the public interest in protecting individuals’ privacy is appropriately balanced with other public interests?
Enter your response here
Exceptions should NOT be provided.
This would only encourage entities to structure their operations around the weakest link in the regulatory chain, which is the exceptions. The regulatory requirements should be consistently applied across the board, including in relation to disclosure of personal information to overseas recipients, so that there is no incentive for entities to lower their standards of conduct. Entities should know and assume they are bound by the highest applicable requirement at all times.
The goal should be for all jurisdictions to have high standards, so that most traffic occurs between jurisdictions with similarly high requirements (eg traffic between Australia and the USA, UK or Europe), and where traffic is not to a highly regulated jurisdiction, the Australian regulatory standard applies. Exceptions or lower standards for overseas traffic will encourage entities to route data through or to countries with little or no regulation.
Notifiable Data Breaches
How can reporting processes for Notifiable Data Breaches be streamlined for APP entities with multiple reporting obligations?
Enter your response here
Make it an obligation for entities to report using the OAIC template.
By the way, there is WIDESPREAD failure to notify. The majority of breaches are very likely not notified under the scheme.
This includes entities who breach data in individual cases, such as serious but one-off breaches - many do not even know of the OAIC's existence or of the Notifiable Data Breaches scheme.
I have encountered a law practice who breached a client's data and did not even know the OAIC existed; they arrogantly told the client to 'sue us', thinking the cost of a lawsuit was beyond the client's capacity. They received a rude awakening when the client took the matter to the OAIC, and the law practice received a rapid education in ... the law.
So in reality the NDB scheme is not being used widely; there are many entities perpetrating notifiable data breaches who are unaware the obligation exists.
Should APP entities be required to take reasonable steps to prevent or reduce the harm that is likely to arise for individuals as a result of a Notifiable Data Breach? If so, what factors should be taken into account when determining reasonable steps?
Enter your response here
YES
Reasonable steps should be defined as 'whatever steps necessary' to prevent the breach.
It should be a high bar for an entity to be able to prove their steps were reasonable.
Basically, the fact of the breach occurring should almost be tantamount to proof that the steps taken (if any) were not reasonable.
Remember that the focus and intent of the legislation and policy is to protect and empower the individual, who has limited resources, from exploitation and sloppy practices by corporations, who have the resources and a duty to respect and care for private information.
So the legislation should always fall in favour of the individual - the person whose privacy has been breached - and not the entity that breached it.
Provide general feedback or upload a written submission
If you would like to provide general feedback on the Privacy Act Review Report please provide your response
Response
THANK YOU for reforming the privacy protections for individuals.
The very existence of the Review, and the policy objectives it puts forward, treats Australians with respect and dignity, where before we were treated as cattle off to the abattoir for corporate greed.
So, thank you to all the departmental staff working tirelessly on this policy development and reading all of these submissions. :)