
In my OpenERP application I am getting the error below:

2015-04-01 09:35:55,959 4169 ERROR new_db openerp.sql_db: bad query: ALTER TABLE "product_product" ADD COLUMN "location" VARCHAR
Traceback (most recent call last):
  File "/opt/openerp/server/openerp/sql_db.py", line 226, in execute
    res = self._obj.execute(query, params)
OperationalError: tables can have at most 1600 columns

2015-04-01 09:35:55,960 4169 ERROR new_db openerp: Failed to initialize database `new_db`.
Traceback (most recent call last):
  File "/opt/openerp/server/openerp/cli/server.py", line 97, in preload_registry
    db, registry = openerp.pooler.get_db_and_pool(dbname,update_module=update_module)
  File "/opt/openerp/server/openerp/pooler.py", line 33, in get_db_and_pool
    registry = RegistryManager.get(db_name, force_demo, status, update_module)
  File "/opt/openerp/server/openerp/modules/registry.py", line 203, in get
    update_module)
  File "/opt/openerp/server/openerp/modules/registry.py", line 233, in new
    openerp.modules.load_modules(registry.db, force_demo, status, update_module)
  File "/opt/openerp/server/openerp/modules/loading.py", line 350, in load_modules
    force, status, report, loaded_modules, update_module)
  File "/opt/openerp/server/openerp/modules/loading.py", line 256, in load_marked_modules
    loaded, processed = load_module_graph(cr, graph, progressdict, report=report, skip_modules=loaded_modules, perform_checks=perform_checks)
  File "/opt/openerp/server/openerp/modules/loading.py", line 165, in load_module_graph
    init_module_models(cr, package.name, models)
  File "/opt/openerp/server/openerp/modules/module.py", line 374, in init_module_models
    result = obj._auto_init(cr, {'module': module_name})
  File "/opt/openerp/server/openerp/osv/orm.py", line 3164, in _auto_init
    cr.execute('ALTER TABLE "%s" ADD COLUMN "%s" %s' % (self._table, k, get_pg_type(f)[1]))
  File "/opt/openerp/server/openerp/sql_db.py", line 161, in wrapper
    return f(self, *args, **kwargs)
  File "/opt/openerp/server/openerp/sql_db.py", line 226, in execute
    res = self._obj.execute(query, params)
OperationalError: tables can have at most 1600 columns

I removed the unnecessary columns from the product_product table, but the above error still occurs.

How do I resolve it? Kindly suggest a solution. Thanks

4 Answers


I know this may be a little late (and maybe you've already found this answer in your travels), but as of this writing (Aug 2016) dropping a column from a Postgres table does not actually remove the column from the table; it merely hides it, and the column still counts toward the table's column limit. See Postgres' documentation on the ALTER TABLE operation:

"The DROP COLUMN form does not physically remove the column, but simply makes it invisible to SQL operations. Subsequent insert and update operations in the table will store a null value for the column. Thus, dropping a column is quick but it will not immediately reduce the on-disk size of your table, as the space occupied by the dropped column is not reclaimed. The space will be reclaimed over time as existing rows are updated. (These statements do not apply when dropping the system oid column; that is done with an immediate rewrite.)"

Source: https://www.postgresql.org/docs/9.5/static/sql-altertable.html

So if you have migrations or some operation that performs a DROP/ADD cycle repeatedly on a table, you will begin to exhaust the available columns until you reach the limit.
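A quick way to confirm this is to count the entries for the table in the pg_attribute catalog: dropped columns stay there with attisdropped set to true, and they still count toward the 1600-column limit. For the table from the question, something along these lines shows how many live and hidden dropped columns it is carrying:

SELECT attisdropped, count(*)
FROM pg_catalog.pg_attribute
WHERE attrelid = 'product_product'::regclass
  AND attnum > 0        -- attnum <= 0 are system columns
GROUP BY attisdropped;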

Dropping the table and recreating it, or copying the data into a new table using INSERT INTO (though you'll have to recreate foreign keys, etc.), will result in a clean table without all of the (hidden) dropped columns from previous operations. This effectively resets your column count.
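For a single table, a minimal sketch of the copy approach could look like the following (the _clean name is just illustrative; any foreign keys that pointed at the old table are dropped by the CASCADE and have to be re-added afterwards):

BEGIN;
-- Copy the structure (defaults, indexes, check constraints) and the data;
-- the new table has only the live columns, so the hidden slots are gone.
CREATE TABLE product_product_clean (LIKE product_product INCLUDING ALL);
INSERT INTO product_product_clean SELECT * FROM product_product;

-- Swap the tables. CASCADE also removes foreign keys that referenced the
-- old table, so those constraints must be re-created afterwards.
DROP TABLE product_product CASCADE;
ALTER TABLE product_product_clean RENAME TO product_product;
COMMIT;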


A quick fix that helped me reduce the number of columns was dumping the database to a file, dropping the database, and then recreating it as follows:

sudo su - postgres                              # switch to the postgres system user
pg_dump nameOfDatabase > backup.psql            # dump the whole database to a file
dropdb nameOfDatabase                           # drop the bloated database
createdb --owner nameOfOwner nameOfDatabase     # recreate it empty, with the original owner
psql -d nameOfDatabase -f backup.psql           # restore the dump into the fresh database
logout
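This works because pg_dump only writes out the columns that still exist, so every table in the restored database starts over with a fresh attribute count and none of the hidden dropped-column entries.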

We had the same issue when building a backend with an ORM, and we couldn't figure out which table was causing it. So I've created a script that recreates all tables and foreign keys:

DROP TABLE IF EXISTS for_keys;
CREATE TEMP TABLE for_keys (
   table_schema text,
   current_table_name text,
   current_column_name text,
   foreign_table_schema text,
   foreign_table_name text,
   foreign_column_name text
);

DO $$
DECLARE
   tabele text[];
   tabela text;
   temp_tabela text;
   fks for_keys[] := '{}';
   fks_item for_keys;
BEGIN
   -- All base tables in the public schema (views are skipped on purpose)
   SELECT array_agg(table_name) INTO tabele
   FROM information_schema.tables
   WHERE table_schema = 'public' AND table_type = 'BASE TABLE';

   FOREACH tabela IN ARRAY tabele
   LOOP
      -- Copying each table (structure and data) into a temporary *_temp table
      temp_tabela := CONCAT(tabela, '_temp');
      EXECUTE format('CREATE TABLE %I (LIKE %I INCLUDING ALL); INSERT INTO %I SELECT * FROM %I',
                     temp_tabela, tabela, temp_tabela, tabela);

      -- Extracting every foreign key on the table (a table can have several)
      FOR fks_item IN
         SELECT
            tc.table_schema,
            tc.table_name    AS current_table_name,
            kcu.column_name  AS current_column_name,
            ccu.table_schema AS foreign_table_schema,
            ccu.table_name   AS foreign_table_name,
            ccu.column_name  AS foreign_column_name
         FROM information_schema.table_constraints AS tc
         JOIN information_schema.key_column_usage AS kcu
           ON tc.constraint_name = kcu.constraint_name
          AND tc.table_schema = kcu.table_schema
         JOIN information_schema.constraint_column_usage AS ccu
           ON ccu.constraint_name = tc.constraint_name
          AND ccu.table_schema = tc.table_schema
         WHERE tc.constraint_type = 'FOREIGN KEY' AND tc.table_name = tabela
      LOOP
         fks := array_append(fks, fks_item);
      END LOOP;
   END LOOP;

   -- Dropping old tables after safely extracting all foreign keys
   FOREACH tabela IN ARRAY tabele
   LOOP
      EXECUTE format('DROP TABLE %I CASCADE', tabela);
   END LOOP;

   -- Creating new tables from the temporary copies and dropping temp
   FOREACH tabela IN ARRAY tabele
   LOOP
      temp_tabela := CONCAT(tabela, '_temp');
      EXECUTE format('CREATE TABLE %I (LIKE %I INCLUDING ALL); INSERT INTO %I SELECT * FROM %I',
                     tabela, temp_tabela, tabela, temp_tabela);
      EXECUTE format('DROP TABLE %I', temp_tabela);
   END LOOP;

   -- Re-creating relations between tables
   FOREACH fks_item IN ARRAY fks
   LOOP
      EXECUTE format('ALTER TABLE %I ADD FOREIGN KEY (%I) REFERENCES %I(%I)',
                     fks_item.current_table_name, fks_item.current_column_name,
                     fks_item.foreign_table_name, fks_item.foreign_column_name);
      RAISE NOTICE 'Created relation: %',
         CONCAT(fks_item.current_table_name, '_', fks_item.current_column_name,
                ' -> ', fks_item.foreign_table_name, '_', fks_item.foreign_column_name);
   END LOOP;
END $$;
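If you try this, take a backup first: it drops and recreates every table in the public schema, and the foreign keys it re-adds are rebuilt from information_schema with default names and no ON DELETE/ON UPDATE actions, so composite keys and custom constraint options are not preserved exactly.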

It's because all the columns are listed in the ir.model.fields table, which has a one2many relation with ir.model. You need to remove them from there.

You can also remove columns from the UI.

Go to Settings -> Database structure -> Models

Find your model and remove fields from there.

I hope it helps you.
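For reference, the fields registered for a model can also be inspected directly in the database; the query below (the same one suggested in the comments) lists them for product.product:

SELECT id, name, ttype, field_description, model, relation
FROM ir_model_fields
WHERE model = 'product.product';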

  • But my application is stuck because of this error; how can I resolve it?
    – user88
    Commented Apr 1, 2015 at 10:16
  • Connect to PostgreSQL from the terminal and run the query there; remove columns from the ir_model_fields table, but only the unnecessary ones. To see how many fields a model has, run this query: select id, name, ttype, field_description, model, relation from ir_model_fields where model='product.product'; Commented Apr 1, 2015 at 10:41
  • I had removed unnecessary columns from the table product_product but the error still comes.
    – user88
    Commented Apr 1, 2015 at 11:01
  • Go to the file /opt/openerp/server/openerp/sql_db.py, line 225, and add print "%s" % (query) there, then restart the server, check the error message, and paste it here, so we can get an idea of which table the system finds errors in and which table we have to remove columns from. Commented Apr 2, 2015 at 5:30
  • I added print "\n\n\n query is ", query and it shows me the error below: query is ALTER TABLE "product_product" ADD COLUMN "location" VARCHAR 2015-04-02 06:23:12,382 3545 ERROR new_db openerp.sql_db: bad query: ALTER TABLE "product_product" ADD COLUMN "location" VARCHAR Traceback (most recent call last): File "/opt/openerp/server/openerp/sql_db.py", line 227, in execute res = self._obj.execute(query, params) OperationalError: tables can have at most 1600 columns
    – user88
    Commented Apr 2, 2015 at 6:26
