Pandas: Sampling a DataFrame

64

28

I'm trying to read a fairly large CSV file with Pandas and split it into two random chunks, one containing 10% of the data and the other the remaining 90%.

Here's my current attempt:

rows = data.index
row_count = len(rows)
random.shuffle(list(rows))

data.reindex(rows)

training_data = data[row_count // 10:]
testing_data = data[:row_count // 10]

For some reason, sklearn throws this error when I try to use one of the resulting DataFrame objects inside an SVM classifier:

IndexError: each subindex must be either a slice, an integer, Ellipsis, or newaxis

I think I'm doing it wrong. Is there a better way to do this?

Blender

Posted 2012-08-30T06:12:46.203

Reputation: 203 675

Question was closed 2016-10-30T19:11:56.443

3Incidentally, this wouldn't randomly shuffle correctly anyway - the problem is random.shuffle(list(rows)). shuffle alters the data it operates on, but when you call list(rows), you make a copy of rows that gets altered and then thrown away - the underlying pandas Series, rows, is unchanged. One solution is to call rows = list(rows), then random.shuffle(rows) and data.reindex(rows) after that. – spencer nelson – 2013-02-20T00:10:39.503
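Putting that comment's fix together (and noting a second issue the question's code has: reindex returns a new DataFrame rather than modifying data in place, so its result must be assigned back), a corrected version of the split might look like the sketch below. The toy DataFrame stands in for the CSV data from the question.

```python
import random

import numpy as np
import pandas as pd

# toy stand-in for the CSV data from the question
data = pd.DataFrame(np.random.randn(100, 4), columns=list("ABCD"))

rows = list(data.index)    # materialize a mutable copy of the index
random.shuffle(rows)       # shuffle that copy in place
data = data.reindex(rows)  # reindex returns a new frame, so reassign it

row_count = len(rows)
testing_data = data[: row_count // 10]   # first 10% of shuffled rows
training_data = data[row_count // 10 :]  # remaining 90%
```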

Answers

77

What version of pandas are you using? For me your code works fine (I'm on git master).

Another approach could be:

In [117]: import pandas

In [118]: import random

In [119]: df = pandas.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))

In [120]: rows = random.sample(df.index, 10)

In [121]: df_10 = df.ix[rows]

In [122]: df_90 = df.drop(rows)

Newer versions (0.16.1 onward) support this directly: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sample.html
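On current pandas the .ix indexer has been removed, and on Python 3 random.sample requires a plain sequence, so a sketch of the same approach today (variable names are illustrative) would be:

```python
import random

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(100, 4), columns=list("ABCD"))

# random.sample needs a plain sequence on Python 3, hence list()
rows = random.sample(list(df.index), 10)
df_10 = df.loc[rows]  # .loc stands in for the removed .ix indexer
df_90 = df.drop(rows)
```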

Wouter Overmeire

Posted 2012-08-30T06:12:46.203

Reputation: 27 522

7Another approach is to use np.random.permutation – Wes McKinney – 2012-09-08T22:37:50.957

1@WesMcKinney: I notice that np.random.permutation would strip the column names from the DataFrame, because it returns a plain NumPy array. Is there a method in pandas that would shuffle the dataframe while retaining the column names? – hlin117 – 2015-03-05T20:03:43.457

4@hlin df.loc[np.random.permutation(df.index)] will shuffle the dataframe and keep column names. – Wouter Overmeire – 2015-03-06T07:22:12.290

1@Wouter Overmeire, I just tried this, and it looks like it might work fine for now, but it also gave me a deprecation warning. – szeitlin – 2015-04-08T17:01:27.613

random.sample() will raise RuntimeError: maximum recursion depth exceeded while calling a Python object if the sample length is too large. I recommend np.random.choice() instead. – redreamality – 2015-12-15T03:21:06.393

79

I have found that np.random.choice() new in NumPy 1.7.0 works quite well for this.

For example, you can pass the index values from a DataFrame and the integer 10 to select 10 uniformly sampled random rows.

rows = np.random.choice(df.index.values, 10)
sampled_df = df.ix[rows]
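One caveat, raised in a comment below: np.random.choice samples with replacement by default, so the same row can appear more than once. A sketch of the duplicate-free variant (using .loc, since .ix has since been removed from pandas) is:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(100, 4), columns=list("ABCD"))

# replace=False samples without replacement, so no row is picked twice
rows = np.random.choice(df.index.values, 10, replace=False)
sampled_df = df.loc[rows]
```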

dragoljub

Posted 2012-08-30T06:12:46.203

Reputation: 826

With IPython's timeit it takes half the time of random.sample.. awesome – gc5 – 2013-11-11T16:34:08.233

+1 for use of np.random.choice. Also, if you have a pd.Series of probabilities, prob, you can pick from the index as so: np.random.choice(prob.index.values, p=prob.values) – LondonRob – 2014-01-22T19:06:18.207

38Don't forget to specify replace=False if you want sampling without replacement. Otherwise this method can potentially sample the same row multiple times. – Alexander Measure – 2014-01-30T03:55:08.803

if you'd like to sample N unique values of a column 'A' from df w/o replacement, I found the following useful: rand_Nvals = np.random.choice(list(set(df.A)), N, replace=False) – Quetzalcoatl – 2015-08-25T04:49:57.130

In my case, I wanted to repeat data -- i.e. take the list ['a','b','c'] and make this list 3,000 long (instead of 3 long). random.sample doesn't allow the result to be bigger than the input (ValueError: Sample larger than population) np.random.choice does allow the result to be bigger than the input. I might be describing a different problem than OP (who specifically says "sample" = smaller than population), but... – The Red Pea – 2015-10-28T16:54:33.953

Update: pandas uses .loc/.iloc in place of .ix now. So you might get a deprecation warning if you try the old command. – istewart – 2017-09-14T03:35:08.213

23

New in version 0.16.1:

sample_dataframe = your_dataframe.sample(n=how_many_rows_you_want)

doc here: http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.sample.html
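Applied to the question's 10%/90% split, sample plus drop yields both chunks in two lines. A sketch (the frac and random_state values are illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(100, 4), columns=list("ABCD"))

testing_data = df.sample(frac=0.10, random_state=0)  # random 10% chunk
training_data = df.drop(testing_data.index)          # remaining 90%
```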

dval

Posted 2012-08-30T06:12:46.203

Reputation: 1 808

14

Pandas 0.16.1 has a sample method for that.

hurrial

Posted 2012-08-30T06:12:46.203

Reputation: 301

Nice! But you still have to load all the data in memory, right? – Nikolay – 2015-06-23T13:40:24.127

I do it after loading the data in memory. – hurrial – 2015-06-24T01:02:36.150

6

If you're using pandas.read_csv you can sample directly while loading the data by using the skiprows parameter. Here is a short article I've written on this - https://nikolaygrozev.wordpress.com/2015/06/16/fast-and-simple-sampling-in-pandas-when-loading-data-from-files/
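A minimal sketch of that idea, assuming a pandas version where skiprows accepts a callable (0.19 onward); the inline CSV stands in for a large file on disk, and keep_fraction is illustrative:

```python
import random
from io import StringIO

import pandas as pd

# stand-in for a large CSV file on disk
csv_data = "a,b\n" + "\n".join(f"{i},{i * i}" for i in range(1000))

random.seed(0)
keep_fraction = 0.10  # keep roughly 10% of the data rows

# skiprows receives each row number; returning True skips that row.
# Row 0 is the header, so it is always kept.
df_sample = pd.read_csv(
    StringIO(csv_data),
    skiprows=lambda i: i > 0 and random.random() >= keep_fraction,
)
```

This never loads the skipped rows, which is the point when the file is too big to read fully into memory first.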

Nikolay

Posted 2012-08-30T06:12:46.203

Reputation: 630

look at itertools.islice – Merlin – 2015-08-12T13:50:02.200

this is the right answer to the question. – redreamality – 2015-12-15T03:23:21.583