Thursday, February 16, 2017

Trends in the data: Changing attitudes towards divorce in Georgia

CRRC’s Caucasus Barometer data show that assessments of whether divorce can or cannot be justified are changing in Georgia. This blog post looks at this trend, and at how these assessments differ by gender, age, and settlement type.

The share of those who report that divorce can be justified has increased since 2011, while the share of those who think divorce cannot be justified has decreased, as has the share of those who answered “Don’t know”. Notably, men and women report similar assessments (2011, 2013, 2015).

Note: The original 10-point scale was re-coded into a 3-point scale, with original codes 1 through 4 labeled “Cannot be justified”, codes 5 and 6 labeled “Neutral”, and codes 7 through 10 labeled “Can be justified” on the chart above.
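The re-coding described in the note above can be sketched in a few lines. This is an illustrative sketch, not CRRC's actual processing code; the function name and the raw response values are assumptions.

```python
# Re-code the original 10-point justifiability scale into the 3-point
# scale used on the chart, following the grouping described in the note.
def recode(score):
    """Map an original 1-10 code to the 3-point chart category."""
    if 1 <= score <= 4:
        return "Cannot be justified"
    elif score in (5, 6):
        return "Neutral"
    elif 7 <= score <= 10:
        return "Can be justified"
    raise ValueError(f"unexpected code: {score}")

# Hypothetical raw responses, re-coded for charting
raw = [2, 5, 9, 7, 4]
print([recode(s) for s in raw])
```

The same grouping could equally be done with a lookup table or, in pandas, with `pd.cut`; the thresholds (1–4, 5–6, 7–10) are the substantive choice.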

Unsurprisingly, residents of Tbilisi report more frequently that divorce can be justified than people living outside the capital. Outside Tbilisi, the most frequent response is that divorce cannot be justified. In Tbilisi, “neutral” assessments became the most frequent in 2015.

Although people who are 56 and older report most often that divorce cannot be justified, such assessments have gradually become less common even for people in this age group, decreasing by nine percentage points since 2011. The sharpest decrease is among those who are between 36 and 55 years old.

Overall, the opinion that divorce cannot be justified remains prevalent in Georgia. Nonetheless, the share of those who report that divorce can be justified is growing, and the share of those who report it cannot be justified is declining. This is true for residents of different settlement types, both males and females, and across age groups, although the attitudes of older people and those living in rural settlements are changing less.

To have a closer look at the Caucasus Barometer data, visit CRRC’s Online Data Analysis tool.

Monday, February 13, 2017

One in four in Georgia report taking antidepressants or antibiotics without a prescription

In a 2016 CRRC survey conducted for Transparency International Georgia (TIG), one in four adults in Georgia reported taking either antidepressants or antibiotics without a doctor’s prescription during the 12 months before the survey. Self-medication with antidepressants can cause serious problems. While antidepressants may have side effects even when taken under the supervision of a physician, the risks are higher without a doctor’s care. Self-medication with antibiotics is also problematic. If taken improperly, antibiotics can contribute to the emergence of antibiotic-resistant bacteria, widely considered one of the world’s most pressing health problems. When a quarter of the population reports taking these types of drugs without the supervision of a physician, it represents a public health concern, one of many in Georgia. Current regulations clearly fail to prevent such practices. This blog post looks at which groups in the population report more often taking antidepressants or antibiotics to self-medicate.

Women are significantly more likely than men to report taking antibiotics or antidepressants without a doctor’s prescription. While 21% of men reported self-medicating with these kinds of pharmaceuticals in the year before the survey, 30% of women did so. A simple cross tabulation suggests the problem is largest in rural settlements. Men in different settlement types are equally likely to report taking these drugs without a prescription, while women in rural settlements are the most likely to report doing so.

Note: Only the shares of those answering “Yes” are shown on the charts in this blog post. 

At first glance, the older population (56 and older) appears in general more likely to take antidepressants and antibiotics without a doctor’s prescription. Similar to the finding with settlement types, men of different ages are equally likely to do so, while women who are 36 years old or older report doing so most often.

With the use of antibiotics and antidepressants, a doctor’s supervision is particularly important. Many in Georgia still take such drugs without it, and the practice is more common among women than men. In order to begin to address this issue, the government of Georgia, and, in particular, the Ministry of Labor, Health and Social Affairs, as well as civil society organizations, should raise awareness of the problems associated with self-medication with antidepressants and antibiotics, especially among women.

The data this blog post reports on will soon be available in our Online Data Analysis tool, and are currently available for download here.

Monday, February 06, 2017

The state procurement system in Georgia: Companies’ views (Part 2)

The first part of this blog post presented findings about Georgian companies’ participation in the state procurement system. This post provides an overview of company representatives’ assessments of the state procurement system and of how these assessments differ depending on a company’s participation or non-participation in the state procurement process.

Company representatives report that their main sources of information about state procurement include the websites of procuring state entities (17%) and the State Procurement Agency’s Unified Electronic System (16%). However, representatives of companies that do not participate in state procurement report that they do not or cannot get information about the state entities’ procurement tenders.

Note: The number of answer options was not limited during the interviews.

Most companies (60%) report trusting the State Procurement Agency, with only a small share (8%) saying they do not trust it. At the same time, almost a third of companies (30%) say they don’t know whether they trust the Agency or not. Notably, representatives of companies which have never bid on state procurement tenders answer “don’t know” more frequently than representatives of companies that have. Moreover, representatives of the companies which have participated in the state procurement process at least once report trusting the Agency more often than the non-participating companies. This result likely indicates that the participating companies’ experiences working with the State Procurement Agency were positive.

Responses to the question, “How much do you agree or disagree that companies which have connections with the government win tenders?” also vary based on whether companies have experience participating in the state procurement system. Representatives of companies that have such experience disagree with this statement more often. Moreover, representatives of companies which have not participated in the state procurement system respond “don’t know” twice as often as participating companies.

Around one third (36%) of the companies that have participated in state procurement think that tenders are often “tailored” to one specific company, with the goal of ensuring that this particular company wins. However, almost half (48%) of the participating companies do not agree with this statement. While 27% of the companies which have participated in the state procurement system agree with the statement that tenders may be repeatedly rejected by the procuring side with the aim of awarding the contract to a particular company through a simplified tender, almost half disagree with this statement.

Note: This question was asked of the representatives of the 17% of companies that have participated in the state procurement system.

As the findings presented in this blog post show, representatives of companies that have participated in the state procurement system at least once assess the State Procurement Agency’s work more positively. Most of them trust the Agency and believe that there is no need to have connections with the government to win tenders. Most representatives of companies that have not participated in the state procurement system have difficulty stating their opinion on these issues. The findings presented in this blog post indicate that the State Procurement Agency should do more to raise awareness among companies about the state procurement system.

To explore this topic more, take a look at CRRC-Georgia’s report Survey of companies on state procurement. 

Friday, January 27, 2017

The state procurement system in Georgia: Companies’ views (Part 1)

The Unified Electronic System for State Procurement was introduced in Georgia in 2010. The system aimed to simplify the state procurement process and make it transparent. According to the State Procurement Agency, “Every year, the state spends hundreds of millions of lari on procurement of different kinds of goods, services and construction. … Accordingly, private companies ought to be interested in state procurement as an important potential source of increasing their incomes.” However, according to the findings of a Survey of companies on the state procurement system conducted by CRRC-Georgia for Deloitte Consulting LLP and USAID in August 2016, a majority of companies do not actively participate in the state procurement process. Based on CRRC-Georgia’s report on the subject, this blog post discusses problems with the system in the companies’ views.

According to the State Procurement Agency’s 2015 annual report, 15.6% of active companies have bid on state procurement tenders (pg. 17). In CRRC-Georgia’s survey, 17% of companies report taking part in the state procurement process, and approximately half of these companies report doing so only sometimes or rarely. Seventy-three percent of companies report not being registered in the Unified Electronic System for State Procurement (UES), which is a requirement for bidding on state procurement tenders.

Note: The 2% of companies whose representatives answered “Don’t know” to the question “Is your company registered in Unified Electronic System for State Procurement?” were excluded from the analysis.

The results of the survey provide some insight about why companies do not participate in state procurement. Most frequently (56%), a lack of interest in participating in the state procurement process was mentioned as the main reason for not participating. We do not, however, have any information about why there is a lack of interest. The second most common reason company representatives mentioned for not participating was that the tenders announced are not applicable to the company’s field of activity (27%).

Note: Only answers of the representatives of companies that are not registered in the Unified Electronic System for State Procurement (73%) are presented in the chart above. 

A majority of companies (64%) report having no information about the announcement of state procurement tenders. Given this general lack of information, it is not surprising that their representatives found it difficult to assess how fairly different types of tenders are conducted. Notably, representatives of 76% of the companies report they have not heard about seminars which the State Procurement Agency conducts with the aim of increasing the knowledge of business people about the state procurement system.

Note: The chart shows the distribution of answers of the 83% of companies that have not participated in the state procurement system. Companies whose representatives answered “Refuse to answer” are excluded from the analysis. There are five types of state procurement tenders in Georgia: simplified procurement, simplified electronic tender, electronic tender, consolidated tender and contest. Definitions of each type of tender are available here.

It is possible that the lack of information is an obstacle to greater participation in state procurement processes. Thus, the State Procurement Agency should better inform companies about its activities.

The second part of this blog post, which will be published next Monday, shows how representatives of companies assess the state procurement system based on whether they have or have not participated in the state procurement system.

Tuesday, January 24, 2017

Developing the “culture of polling” in Georgia (Part 2): The misinterpretation and misuse of survey data

[Note: This is the second part of a guest blog post from Natia Mestvirishvili, a Researcher at the International Centre for Migration Policy Development (ICMPD) and a former Senior Researcher at CRRC-Georgia. The first part of this blog post is available here. This post was co-published with the Clarion.]

The misinterpretation of survey findings is a rather widespread problem in Georgia. Unfortunately, it often leads to the misuse of data, which not only diminishes the importance of survey research, but also leads to more serious consequences for the country.

To illustrate how one might misinterpret survey data, the following example from CRRC’s 2015 Caucasus Barometer survey can be used. When asked, “What do you think is the most important issue facing Georgia at the moment?”, only 3% of the population mentioned low pensions, 2% the unaffordability of healthcare, and 2% the low quality of education. A number of issues including the violation of human rights, unfairness of courts, corruption, unfairness of elections, unaffordability of professional or higher education, the violation of property rights, gender inequality, religious intolerance and emigration were grouped into the category “Other”, because, in total, only 7% of the population mentioned these issues.

Based on these findings, one might think that these issues are unimportant in Georgia. However, this would be a misinterpretation, which happens for a number of reasons. Here, I focus on two. The first is:

1. Not paying attention to the exact formulation (wording) of the survey question, answer options, and instructions 

One reason a large share of the population did not mention the violation of human rights, gender inequality and religious intolerance as important issues is because each respondent could name only one issue. The options they chose (unemployment and poverty were named most often) were more important to them than human rights, gender inequality, and religious intolerance.

If a different question – “How important is the issue of human rights [or gender inequality, or religious intolerance] for Georgia?” – had been asked, the share of people who would answer that these issues are important would very likely be much higher than one or two percent. This wording would make people judge the issue not in relative, but absolute terms.

While working with survey findings, the exact wording of question(s) should always be taken into account. When the question is interpreted or reworded, it will almost inevitably lead to some degree of misinterpretation. More often than not, fieldwork instructions should also be taken into account. For example, was a show card used for the question? Was the number of answer options a respondent could choose limited or not?

Thus, it is crucial that survey results are understood and reported, keeping in mind the exact wording of the question(s), answer options provided, and any instruction(s) that had to be followed during the interviews. This will help minimize the risk of misinterpretation.

A second common cause of misinterpretation of public opinion polls in Georgia is:

2. Interpreting public opinion survey results as ‘reality’ rather than perceptions 

Even if the question discussed above had been asked so that the absolute rather than relative importance of the issues was measured and the survey findings still suggested that people thought the violation of human rights, gender inequality and religious intolerance were not important issues for the country, the findings should not be interpreted as a direct reflection of ‘reality.’ As discussed in the first part of this blog post, public perceptions are not ‘reality’.

Interpreting public perceptions as objective ‘reality’ is incorrect, because both perceptions and misperceptions, information and misinformation shape public opinion. It is equally important to remember that, sometimes, ‘reality’ simply does not exist. Moreover, as a number of studies have shown, it is often the case that people are simply wrong about a wide variety of things.

None of the above, however, diminishes the role and importance of public opinion polls. In fact, the misperceptions that survey findings can uncover are often among the most important outcomes for policymakers. Instead of putting an equal sign between public perceptions and ‘reality,’ data analysts and policymakers should critically analyze and address gaps between the two.

Going back to the above example, an accurate interpretation would consider the findings in the context of other studies that are specifically focused on human rights (or gender equality or religious tolerance). Indeed, numerous studies indicate that Georgia has serious problems with all three issues, i.e., the population does not have much respect for human rights, gender equality, or people of other religions. Even a quick look at the latest Human Rights Watch report on Georgia makes this quite clear.

Looking at inconsistencies between people’s answers to different questions, or between survey findings and other types of data when available and relevant, is a good way to uncover misperceptions. For example, a 2014 CRRC/NDI survey found that roughly every fourth person reports there is gender equality in Georgia. However, about half of those who think so also think that taking care of the home and family makes women as satisfied as having a paid job, and that in order to preserve the family, the wife should endure a lot from her spouse.

The answers to these three questions should be presented and discussed not separately, as independent findings, but rather as interrelated findings that, taken together, give a better understanding of the assessments of and attitudes towards gender equality in Georgia. In this context, the question that needs to be raised and answered is why and how this inconsistency between answers occurs.

The misuse of survey findings happens when findings are presented and used in a way that reinforces people’s misperceptions and prejudices. The misinterpretation of findings often leads to their misuse, and eventually, can lead to serious issues.

Again, going back to the most important issue example, it would be a misuse of survey findings to conclude that since the violation of human rights, religious intolerance or gender inequality seem to not be perceived as important issues in Georgia, no policy is needed to address them. As demonstrated above, alternative sources show that these issues need to be addressed, and, at the very least, awareness of them needs to increase. Thus, policy intervention is needed.

What the survey findings tell us in this case is that people underestimate the importance of these issues. In turn, this contributes to the worsening of the problems. If you believe gender inequality or religious intolerance are not important issues, you probably will not care about them either. Thus, the larger the gap between public perceptions and reality, the more important it is for policy makers to address the issue.

Public opinion should not be used as a directive for policy making without careful analysis of misperceptions and alternative sources of information.

Unfortunately, in Georgia it is sometimes exactly the misperceptions that drive policy. Speaking of recent developments, misperceptions about homosexuality have led politicians to talk more about the prohibition of same-sex marriage, something that has never been allowed in Georgia in the first place, than about human rights issues. Misperceptions about gender roles led politicians to reject a proposal that would define femicide as the premeditated murder of a woman based on her gender. Looking forward, the country cannot allow the misperception that the EU threatens Georgia’s traditions to drive its foreign policy.

Now more than ever, when Georgia is still attempting to transition into a stable, democratic country, the country needs policymakers and researchers who have the knowledge and skills to critically analyze survey findings and use their potential for the development of the country.

Monday, January 16, 2017

Developing the “culture of polling” in Georgia (Part 1): Survey criticism in Georgia

[Note: This is a guest post from Natia Mestvirishvili, a Researcher at International Centre for Migration Policy Development (ICMPD) and former Senior Researcher at CRRC-Georgia. This post was co-published with the Clarion.]

Intense public debate usually accompanies the publication of survey findings in Georgia, especially when the findings are about politics. The discussions are often extremely critical or even call for the rejection of the results.

Normally criticism of surveys would focus on the shortcomings of the research process and help guide researchers towards better practices to make surveys a better tool to understand society. In Georgia most of the current criticism of surveys is, unfortunately, counterproductive and mainly driven by an unwillingness to accept the findings, because the critics do not like them. This blog post outlines some features of survey criticism in Georgia and highlights the need for constructive criticism aimed at the improvement of research practice, because constructive criticism is extremely important and useful for the development of the “culture of polling” in Georgia.

Criticism of surveys in Georgia often stems from discrepancies between the findings and the critics’ own beliefs about public opinion. Hence, the survey critics claim that the findings do not correspond to ‘reality’. Or rather, their reality.

But, are surveys meant to measure ‘reality’? For the most part, no. Rather, public opinion polls measure and report public opinion, which is shaped not only by perceptions but also by misperceptions, i.e., the views and opinions that people have. There is no ‘right’ or ‘wrong’ opinion. It is equally important that these are opinions that people feel comfortable sharing during interviews, while talking to complete strangers. Consequently, and leaving aside deeply philosophical discussions about what reality is and whether it exists at all, public opinion surveys measure perceptions, not reality.

Among the many assumptions that may underlie criticism of surveys in Georgia, critics often suggest that:

  1. They know best what people around them think;
  2. What people around them think represents the opinions of the country’s entire population. 

However, both of these assumptions are wrong, because, in fact:

  1. Although people in general believe that they know others well, they don’t. Extensive psychological research shows that there are common illusions which make us think we know and understand other people better than we actually do – even when it comes to our partners and close friends;
  2. Not only does everyone have a limited choice of opinions and points of view in their immediate surroundings compared to the ‘entire’ society, but it has also been shown that people are attracted to similarity. As a result, primary social groups are composed of people who are alike. Thus, people tend to be exposed to the opinions of their peers, people who think alike. There are many points of view in other social groups that a person may never come across, not to mention understand or hold; 
  3. Even if a person has contacts with a wide diversity of people, these will never be enough to be representative of the entire society. Even if they were, individuals lack the ability to judge how opinions are distributed within a society.

To make an analogy, assuming that the opinions we hear around us can be generalized to the entire society is very similar to zooming in on a particularly large country, like Canada, on a map of a global freedom index and assuming that since Canada is green, i.e. rated as “Free”, the same is true for the rest of the world. In fact, if we zoom out, we will see that the world is far from uniformly green. Rather, it is very colorful, with most countries colored differently, and “big” Canada is no indication of the state of the rest of the world.

People who think that what people around them think (or, to be even more precise – who think that what they think that people around them think) can be generalized to the whole country make a similar mistake.

Instead of objective and constructive criticism based on unbiased and informed opinions and professional knowledge, public opinion polls in Georgia are mostly discussed based on emotions and personal preferences. Professional expertise is almost entirely lacking in those discussions.

Politicians citing questions from the same survey in either a negative or positive context, depending on whether they like the results, is a good illustration of the above claim. For example, positive evaluations of a policy or development by the public are often proudly cited by political actors without any doubts about the quality of the survey. At the same time, low and/or declining public support for a particular party in the findings of the same survey is “explained away” by the same actors as poor data quality. Subsequently, politicians may express distrust in the research institution that conducted the survey.

In Georgia and elsewhere, survey criticism should focus on the research process and aim at its improvement, rather than rejecting the role and importance of polling. It is the duty of journalists, researchers and policymakers to foster healthy public debate on survey research. Instead of emotional messages aimed at demolishing trust in public opinion polls and pollsters in general, what is needed is rational and careful discussion of the research process and its limitations, of research findings and their meaning and significance, and, where possible, of ways to improve survey practice.

Criticism focused on “unclear” or “incorrect” methodology should be further elaborated by professionally specifying the aspects that are unclear or problematic. Research organizations in Georgia will highly appreciate criticism that asks specific questions aimed at improving the survey process. For example, does the sample design allow for the generalization of the survey results to the entire population? How were misleading questions avoided? How have the interviewers been trained and monitored to minimize bias and maximize the quality of the interviews?

This blog post argued that survey criticism in Georgia is often based on inaccurate assumptions and conveys messages that are not helpful for research organizations from the point of view of improving their practice. These messages are also often dangerous as they encourage uninformed skepticism towards survey research in general. Rather than these unhelpful messages, I call on actors to engage in constructive criticism which will contribute to the improvement of the quality of surveys in Georgia, which in turn will allow people’s voices to be brought to policymakers and their decisions to be informed by objective data.

The second part of this blog post, to be published on January 23, continues the topic, focusing on examples of misinterpretation and misuse of survey data in Georgia.

Tuesday, January 10, 2017

Sex selective abortion is likely less common in Georgia than previously thought

[This blog post was co-published with Eurasianet. The views presented in this article do not necessarily reflect the views of CRRC-Georgia.]

Sex-selective abortion in Georgia is a topic that has caught international attention. From an Economist article published in September 2013 to a 2015 UN report, Georgia tends to be portrayed as having one of the worst sex-selective abortion problems in the world. Closer inspection of the data, however, suggests the issue may be blown out of proportion.

The first study to draw attention to the sex-selective abortion issue in Georgia was published in 2013 in the journal International Perspectives on Sexual and Reproductive Health, and relied on statistics compiled by the World Health Organization. The authors found a sex-at-birth ratio of 121 boys for every 100 girls born in Georgia from 2005 to 2009. That number suggested there was a problem: one of the most common estimates of the natural sex-at-birth ratio is 105 boys for every 100 girls, or 95.2 girls for every 100 boys. Any difference between the natural and observed ratios in favor of boys is generally thought to be a proxy for sex-selective abortion.
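The equivalence between the two ways of expressing the natural ratio is a one-line conversion, sketched below for clarity (this is simple arithmetic, not part of the original study):

```python
# 105 boys per 100 girls, expressed instead as girls per 100 boys:
girls_per_100_boys = 100 / 105 * 100
print(round(girls_per_100_boys, 1))  # 95.2
```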

The study suggested that Georgia had one of the largest sex selective abortion problems in the world.

However, a missing data issue, a rounding error, and an anomalous sex at birth ratio in 2008 drove up the sex at birth ratio reported for Georgia in the original study. In the article, the sex at birth ratio between 2005 and 2009 is actually the average of the ratios in 2005 and 2008. Martin McKee, one of the co-authors of the study, stated: "The figure of 121 boys to 100 girls in 2005-2009 was calculated on the basis of the data submitted to the WHO at the time, from which several years were missing."

The missing data had a very large effect on the results of the study. In 2008, the ratio of boys to girls born in Georgia was exceptionally high at 128 boys born for every 100 girls. In 2005, 113 boys were born for every 100 girls, another high year for Georgia. Using these two years leads to an average of 120 boys born for every 100 girls between 2005 and 2009.
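The arithmetic behind that two-year average can be reproduced directly from the figures cited above (a sketch; the published figures were rounded from something close to this):

```python
from statistics import mean

# Sex-at-birth ratios (boys per 100 girls) for the only two years
# available to the original study
ratios = {2005: 113, 2008: 128}

avg = mean(ratios.values())
print(avg)  # 120.5, which rounds to the ~120-121 figure discussed above
```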

Notably, when asked about the discrepancy between the 121 boys to 100 girls ratio reported in the article and the 120 to 100 ratio in the data, McKee acknowledged, “A very small rounding error crept in.”

With the full data between 2005 and 2009, however, the average sex at birth ratio drops to 113 boys for every 100 girls, rather than 120 – about half the reported deviation from the natural rate.

On top of the missing data, the fact that 2008’s sex at birth ratio is an outlier further exaggerates the reported magnitude of sex selective abortion in Georgia. If between 2005 and 2009, the average ratio was 113 boys for every 100 girls, the average ratio for the same period excluding 2008 is 110 boys for every 100 girls. That is to say, by excluding 2008, there were 5 excess boys born for every 100 girls rather than 8.

To flip the statistic around and look at the ratio of girls born for every 100 boys, the average between 2005 and 2009 was 88 including 2008 and 91 excluding it. Translating this into the number of missing girls (subtracting the number of girls born according to official data from the number expected) suggests 6.74 missing girls for every 100 boys born when the 2008 data are included. Without 2008, this drops to 4.20.
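These conversions follow from simple ratio arithmetic, sketched below. The sketch reproduces the 6.74 figure from the rounded ratio of 113; applying it to the rounded ratio of 110 gives roughly 4.3 rather than the 4.20 reported above, presumably because the original calculation used unrounded yearly data not reproduced in this post.

```python
# Girls expected per 100 boys at the natural ratio of 105 boys per 100 girls
NATURAL = 100 / 105 * 100  # ~95.24

def girls_per_100_boys(boys_per_100_girls):
    """Flip a boys-per-100-girls ratio into girls per 100 boys."""
    return 100 / boys_per_100_girls * 100

def missing_girls(boys_per_100_girls):
    """Girls 'missing' per 100 boys, relative to the natural ratio."""
    return NATURAL - girls_per_100_boys(boys_per_100_girls)

print(round(girls_per_100_boys(113)))  # 88 (2005-2009 average incl. 2008)
print(round(missing_girls(113), 2))    # 6.74
```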

The exact causes of the situation recorded in 2008 are unknown. Although a higher than natural sex at birth ratio favoring boys is often explained by sex selective abortions and infanticide, comparing an estimate of the number of missing girls to the number of abortions over time suggests that some other factor may be at work.

Dividing the number of missing girls by the number of abortions in a year provides an estimate of the share of abortions that would need to be sex selective for it to explain the sex at birth imbalance. These calculations would suggest that sex selective abortion increased from 6% of all registered abortions in 2007 to 24% in 2008.
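The calculation works as follows; the inputs in the example are clearly hypothetical placeholders, since the underlying registry counts are not reproduced in this post.

```python
def share_sex_selective(expected_girls, registered_girls, abortions):
    """Share of abortions that would have to be sex selective to account
    for the gap between expected and registered girl births.
    All arguments are counts for a single year."""
    missing = expected_girls - registered_girls
    return missing / abortions

# Hypothetical example: 1,000 missing girls against 25,000 registered
# abortions implies 4% of abortions would need to be sex selective.
print(share_sex_selective(26_000, 25_000, 25_000))  # 0.04
```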

The calculations suggest one of three things: there was a dramatic increase in sex selective abortions in 2008, the number of unregistered abortions dramatically increased and they were also predominantly sex selective, or something else was driving the anomalous sex at birth ratio.

In the other category, many possible explanations exist. Notably, given the often poor state of data collection at the municipal level in Georgia, where births are recorded, recording error could explain the discrepancy.

The data alone cannot tell us whether 2008 saw a dramatic increase in the number of sex selective abortions or something else drove the anomalous sex at birth ratio. What is clear is that Georgia’s problem with sex-selective abortion is smaller than often portrayed.

That isn’t to say it is not a problem. In 2015, there were still about 4 missing girls for every 100 boys born.

Understanding the magnitude of the problem though is a first step towards addressing it.

Dustin Gilbreath is a Policy Analyst at CRRC-Georgia. He co-edits the organization’s blog, Social Science in the Caucasus.

To view the data used to calculate the figures used in this article, click here.