Optimize SQL update query

I have an UPDATE that takes a very long time to finish: about 10 million rows need to be updated, and the last execution ran for 6 hours.

This is the query:

UPDATE A
SET a_top = 'N'
WHERE (a_toto, a_num, a_titi) IN
      (SELECT a_toto, a_num, a_titi
       FROM A
       WHERE a_titi <> 'test' AND a_top IS NULL
       LIMIT 10000000);

Two indexes have been created :

CREATE UNIQUE INDEX pk_A ON A USING btree (a_toto, a_num, a_titi);
CREATE INDEX id1_A ON A USING btree (a_num);

These are the things I already checked :

  • No locks
  • No triggers on A

The execution plan shows that the indexes are not used. Would it change anything if I dropped the indexes, ran the update, and then recreated the indexes afterwards?
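To make that concrete, here is the sequence I was considering (untested sketch, reusing the index definitions above):

DROP INDEX pk_A;
DROP INDEX id1_A;
-- run the UPDATE from above here
CREATE UNIQUE INDEX pk_A ON A USING btree (a_toto, a_num, a_titi);
CREATE INDEX id1_A ON A USING btree (a_num);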

Is there a way of improving the query itself ?
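For example, I wondered whether rewriting it without the IN subquery would behave differently (untested sketch; note this drops the LIMIT, since UPDATE itself does not accept one):

UPDATE A
SET a_top = 'N'
WHERE a_titi <> 'test'
  AND a_top IS NULL;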

Here is the execution plan :

Update on A  (cost=3561856.86..10792071.61 rows=80305304 width=200)
  ->  Hash Join  (cost=3561856.86..10792071.61 rows=80305304 width=200)
        Hash Cond: (((A.a_toto)::text = (("ANY_subquery".a_toto)::text)) AND ((A.a_num)::text = (("ANY_subquery".a_num)::text)) AND ((A.a_titi)::text = (("ANY_subquery".a_titi)::text)))
        ->  Seq Scan on A  (cost=0.00..2509069.04 rows=80305304 width=126)
        ->  Hash  (cost=3490830.00..3490830.00 rows=2082792 width=108)
              ->  Unique  (cost=3390830.00..3490830.00 rows=2082792 width=108)
                    ->  Sort  (cost=3390830.00..3415830.00 rows=10000000 width=108)
                          Sort Key: (("ANY_subquery".a_toto)::text), (("ANY_subquery".a_num)::text), (("ANY_subquery".a_titi)::text)
                          ->  Subquery Scan on "ANY_subquery"  (cost=0.00..484987.17 rows=10000000 width=108)
                                ->  Limit  (cost=0.00..384987.17 rows=10000000 width=42)
                                      ->  Seq Scan on A A_1  (cost=0.00..2709832.30 rows=70387600 width=42)
                                            Filter: ((a_top IS NULL) AND ((a_titi)::text <> 'dlabcbe'::text))
(12 rows)

Thanks for your help.