Posted By: Anonymous
I would like to read several csv files from a directory into pandas and concatenate them into one big DataFrame. I have not been able to figure it out though. Here is what I have so far:
import glob
import pandas as pd

# get data file names
path = r'C:DRODCL_rawdata_files'
filenames = glob.glob(path + "/*.csv")

dfs = []
for filename in filenames:
    dfs.append(pd.read_csv(filename))

# Concatenate all data into one DataFrame
big_frame = pd.concat(dfs, ignore_index=True)
I guess I need some help within the for loop???
If all your csv files have the same columns, you can try the code below. I have added header=0 so that, after reading each csv, the first row is assigned as the column names.
import pandas as pd
import glob

path = r'C:DRODCL_rawdata_files' # use your path
all_files = glob.glob(path + "/*.csv")

li = []
for filename in all_files:
    df = pd.read_csv(filename, index_col=None, header=0)
    li.append(df)

frame = pd.concat(li, axis=0, ignore_index=True)
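For reference, here is a self-contained sketch of the same pattern that you can run anywhere: it writes a couple of throwaway csv files into a temporary directory (the file names and contents are made up for illustration), then globs and concatenates them. Note that pd.concat also accepts a generator, so building the intermediate list is optional.

```python
import glob
import os
import tempfile

import pandas as pd

# Create a temporary directory with two small csv files so the
# glob + concat pattern can be demonstrated end to end.
tmpdir = tempfile.mkdtemp()
for name, contents in [("a.csv", "x,y\n1,2\n3,4\n"), ("b.csv", "x,y\n5,6\n")]:
    with open(os.path.join(tmpdir, name), "w") as f:
        f.write(contents)

all_files = glob.glob(os.path.join(tmpdir, "*.csv"))

# Pass a generator straight to pd.concat instead of appending to a list.
frame = pd.concat(
    (pd.read_csv(f, index_col=None, header=0) for f in all_files),
    axis=0,
    ignore_index=True,
)

print(frame.shape)  # (3, 2) -- three rows across both files, two columns
```

Because ignore_index=True is passed, the resulting DataFrame gets a fresh 0..n-1 index rather than repeating each file's own row numbers.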